Gigabyte GTX 1080 G1 Gaming Review

Editor-in-Chief

With custom GTX 1080 cards finally making their way onto the market after an NVIDIA-imposed exclusivity window for the Founders Edition, we are able to better judge the types of cards that will be getting into most gamers’ hands. While we’ve already covered EVGA’s Superclocked ACX 3.0, there are plenty of other options right now, among them Gigabyte’s new GTX 1080 G1 Gaming. Like many other non-reference GTX 1080 cards, the G1 Gaming is not only more affordable than NVIDIA’s Founders Edition but it also comes with a long list of features and some impressive clock speeds.

Like many of NVIDIA’s other board partners, Gigabyte has a whole lineup of GTX 1080s ready to offer buyers specific combinations of features and performance. The $649 G1 Gaming in this particular review sits within the middle of that lineup, positioned as a higher performance (yet lower priced) option than the aforementioned Founders Edition while featuring clock speeds that are lower than the Xtreme Gaming and Xtreme Gaming Water Cooled. All in all it is supposed to compete directly against the likes of ASUS’ STRIX OC, EVGA’s Superclocked ACX 3.0 and MSI’s Gaming 8G. Among such illustrious company, the G1 Gaming will have a tough time standing out.

From a raw specifications perspective the Gigabyte GTX 1080 G1 Gaming offers Base and Boost clocks that notably exceed NVIDIA’s reference specifications. As a matter of fact, those 1751MHz / 1860MHz Base and Boost frequencies make this card one of the fastest at its respective price point. Compared against the Founders Edition we expect the gap to be even greater since Gigabyte’s advanced Windforce 3X heatsink should help keep those frequencies consistent in situations where the FE struggled. Unfortunately, memory speeds remain at 10Gbps, but this was likely done to ensure adequate yields on this particular SKU.

While there may not be all that much from a clock speed perspective to differentiate the G1 Gaming from a card like the Superclocked ACX 3.0, it does offer numerous benefits that EVGA’s Superclocked card doesn’t. There’s a fully controllable RGB LED, a custom PCB with upgraded components and a binned core which is supposed to offer more overclocking headroom. Many of those items require an “upgrade” to the more expensive FTW or Classified versions within EVGA’s lineup so even though we can’t consider a $649 graphics card inexpensive, this one does have some potential value packed into its frame.

One thing to note is that Gigabyte offers three distinct speed presets for their GTX 1080 G1 Gaming, all of which can be accessed through the Xtreme Engine app. If the card is installed without this software, it will run at Gaming speeds, which equates to 1695MHz / 1835MHz for the Base and Boost frequencies respectively; Gigabyte doesn’t publish the ECO Mode running speeds. OC, meanwhile, is highlighted in the chart above and it is the setting we will use for all of the tests. Supposedly each of these presets includes a different fan speed profile as well, so check the next page for more information about that.

Now is actually a good time to talk about this Xtreme Engine application since it represents a massive departure from the Gigabyte OC Guru we have been seeing for the last few generations. Whereas the last few versions could charitably be called “clunky”, this one is sleek, well designed and highly intuitive. There’s a section with full control over the single backlit RGB logo, a dedicated fan profile area and a user friendly Overclocking tab.

Unlike previous software there are no hidden settings or oddball toggle switches; everything is right there, ready to be used. I actually appreciate its functionality more than the new Precision but it is missing EVGA’s straightforward per-point voltage modifiers and instead uses a confusing curve-like approach which didn’t seem to have any discernible impact upon achievable overclocks.

Gigabyte first introduced their Windforce 3X heatsink years ago and since then it has gone through several evolutions, from oversized to sleek to a bit overdesigned. On the façade at least this latest iteration is actually a pretty significant departure from the one which showed up on their GTX 980 Ti G1 Gaming. Whereas that card included a minimalist black and white shroud, the GTX 1080 version utilizes an all-encompassing, predominantly black cover with a few red highlights. It is much less distinctive than EVGA’s current designs but I think it is much more likely to fit a broader range of builds.

The new design direction here makes this card look more like it belongs in ASUS’ Republic of Gamers lineup rather than something which was cooked up in Gigabyte’s kitchen. However, there’s a good reason for the red / black color scheme: it is used quite extensively in Gigabyte’s own Gaming motherboard lineup.

In my opinion, the Windforce 3X cooler has historically been one of the best heatsinks around. By utilizing a large horizontally-placed heatsink which runs the card’s entire length, Gigabyte has been able to effectively maximize dissipation while also ensuring their direct touch heatpipe base has enough thermal mass above it to keep up with the core’s output. That base has been revised this time around so that its heatpipes all make contact with the smaller GP104 core.

No massive heatsink would be complete without high performance fans and the ones here have been engineered to maximize downward airflow without requiring high rotational speeds. They have a unique triangular edge and thin strips at their mid-points, which Gigabyte claims grant a 23% airflow increase over traditional fan designs. Whether or not that will have a positive effect upon cooling performance remains to be seen.

There is one sacrifice with this heatsink: it is a bit longer than the PCB. That means the Gigabyte GTX 1080 G1 Gaming is about 11 ¼” long, which shouldn’t cause installation problems in most ATX and larger cases, but it is nonetheless something to take note of if you are looking at a more compact chassis.

Along the card’s outer edge there’s a single backlit LED Gigabyte logo as well as a small illuminated Fan Stop sign which lights up to tell users there’s nothing actually wrong when the fans grind to a halt in low-load situations. The single 8-pin power input is also located here.

Unlike other cards in this price range, Gigabyte has gone for the full monty on their G1 Gaming and integrated a heavily upgraded 8+2 phase all-digital PWM, a custom PCB with a 2oz copper layer for better heat distribution and even long life capacitors. Many of these components interface directly with the heatsink through integrated heat pads to ensure lower operating temperatures, especially when the card is overclocked.

Gigabyte’s backplate is a straightforward yet functional affair without any of the grilles or fancy graphics of some competing solutions. However, it gets the job done and all of the mission critical components located on this side of the PCB should benefit from its additional heat dissipation characteristics.

Gigabyte’s rear I/O area design is an interesting one but the connectors here are simply based on the reference design. There’s a trio of DisplayPort 1.4-ready outputs, a single HDMI 2.0b and one DVI-D connector. The departure from standardization is evident when you look at the grille: whereas many competitors maximize its openings in the hope that some hot air will be exhausted here, Gigabyte has gone for a distinctive slotted design. Whether this will help or impede outward air movement is anyone’s guess.

Test System & Setup

Processor: Intel i7 5960X @ 4.3GHz
Memory: G.Skill Trident X 32GB @ 3000MHz 15-16-16-35-1T
Motherboard: ASUS X99 Deluxe
Cooling: NH-U14S
SSD: 2x Kingston HyperX 3K 480GB
Power Supply: Corsair AX1200
Monitor: Dell U2713HM (1440P) / Acer XB280HK (4K)
OS: Windows 10 Pro

Drivers:
AMD Radeon Software 16.5.2
NVIDIA 368.14 WHQL

*Notes:

– All games tested have been patched to their latest version

– The OS has had all the latest hotfixes and updates installed

– All scores you see are the averages after 3 benchmark runs

– All IQ settings were adjusted in-game and all GPU control panels were set to use application settings

The Methodology of Frame Testing, Distilled

How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing the single second intervals which took the longest and shortest amount of time to render. Combining these values gives an accurate, albeit very narrow, snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
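
To make that arithmetic concrete, here is a minimal sketch of how a FRAPS-style average, minimum and maximum could be derived from per-frame timestamps; the function and sample values are hypothetical rather than FRAPS’ actual code.

```python
# A sketch of FRAPS-style math using hypothetical per-frame timestamps (in seconds);
# the function name and sample values are illustrative, not FRAPS' actual implementation.

def fraps_style_fps(frame_timestamps_s, run_length_s):
    """Return average FPS plus min/max FPS taken over one-second intervals."""
    average_fps = len(frame_timestamps_s) / run_length_s   # e.g. 4,000 frames / 60 s = 66.67 FPS

    # Count how many frames land inside each one-second interval of the run.
    buckets = [0] * int(run_length_s)
    for t in frame_timestamps_s:
        buckets[min(int(t), len(buckets) - 1)] += 1

    return average_fps, min(buckets), max(buckets)

# A perfectly even 60-second run of 4,000 frames reports roughly 66.67 FPS on average.
timestamps = [i * (60 / 4000) for i in range(4000)]
print(fraps_style_fps(timestamps, 60))
```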

FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can report them. So what actually happens within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool gets factored into this equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond-by-millisecond evaluation of frame times rather than averaging things out over a full second. The longer the frame time, the longer that individual frame took to render and the more noticeable any hitch becomes. This detailed reporting just isn’t possible with standard benchmark methods.
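
As a quick illustration of why this matters, the hedged sketch below runs a handful of hypothetical frame times through that logic; the values and the spike threshold are assumptions for demonstration only.

```python
# Hypothetical frame times in milliseconds: five quick frames and one long hitch.
frame_times_ms = [16.7, 16.9, 17.1, 48.0, 16.8, 16.6]

average_frame_time = sum(frame_times_ms) / len(frame_times_ms)   # ~22 ms
worst_instantaneous_fps = 1000.0 / max(frame_times_ms)           # ~21 FPS during the hitch

# An averaged figure still looks respectable, but per-frame data exposes the 48 ms spike.
spikes = [ft for ft in frame_times_ms if ft > 2 * average_frame_time]
print(f"average frame time: {average_frame_time:.1f} ms, "
      f"worst moment: {worst_instantaneous_fps:.0f} FPS, spikes: {spikes}")
```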

We are now using FCAT for ALL benchmark results in DX11.

DX12 Benchmarking

For DX12 many of these same metrics can be utilized through a simple program called PresentMon. Not only does this program have the capability to log frame times at various stages throughout the rendering pipeline but it also grants a slightly more detailed look into how certain API and external elements can slow down rendering times.

Since PresentMon outputs massive amounts of frametime data, we have decided to distill the information down into slightly easier-to-understand graphs. Within them, we have taken several thousand datapoints (in some cases tens of thousands), converted the frametime milliseconds over the course of each benchmark run to frames per second and then graphed the results. This gives us a straightforward framerate over time graph. Meanwhile the typical bar graph averages out every data point as it’s presented.
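
For anyone curious about the conversion itself, a rough sketch is shown below; it assumes PresentMon’s standard MsBetweenPresents column and a hypothetical log file name, and is not the exact script behind our graphs.

```python
# Rough sketch: turn a PresentMon log into a framerate-over-time series.
# Assumes PresentMon's standard MsBetweenPresents column; the file name is hypothetical.
import csv

elapsed_s, fps_over_time = 0.0, []
with open("presentmon_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        frame_ms = float(row["MsBetweenPresents"])
        if frame_ms <= 0:
            continue                      # skip the occasional zero-length first sample
        elapsed_s += frame_ms / 1000.0
        fps_over_time.append((elapsed_s, 1000.0 / frame_ms))

# (seconds into the run, instantaneous FPS) pairs are ready for graphing, while the
# overall average simply divides total frames by total elapsed time.
average_fps = len(fps_over_time) / elapsed_s
```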

One thing to note is that our DX12 PresentMon results cannot and should not be directly compared to the FCAT-based DX11 results. They should be taken as a separate entity and discussed as such.

Analyzing Temperatures & Frequencies Over Time

Modern graphics card designs make use of several advanced hardware- and software-based algorithms in an effort to hit an optimal balance between performance, acoustics, voltage, power and heat output. Traditionally this leads to maximized clock speeds within a given set of parameters. Conversely, if either of those last two metrics (heat and power consumption) enters the equation in a negative manner, it is quite likely that voltages and resulting core clocks will be reduced to ensure the GPU remains within design specifications. We’ve seen this happen quite aggressively on some AMD cards while NVIDIA’s reference cards also tend to fluctuate their frequencies. To be clear, this is a feature by design rather than a problem in most situations.

In many cases clock speeds won’t be touched until the card in question reaches a preset temperature, whereupon the software and onboard hardware will work in tandem to carefully regulate other areas such as fan speeds and voltages to ensure maximum frequency output without an overly loud fan. Since this algorithm typically doesn’t kick into full force in the first few minutes of gaming, the “true” performance of many graphics cards won’t be revealed through a typical 1-3 minute benchmarking run. Hence the 10-minute warm-up period we use before all of our benchmarks.
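
This isn’t the instrumentation we actually use, but a simple sketch of how clock speeds and temperatures could be logged across a warm-up and test window with nvidia-smi would look something like this:

```python
# Sketch only: sample GPU temperature and core clock once per second via nvidia-smi
# through a warm-up plus measurement window (the durations here are illustrative).
import subprocess, time

QUERY = ["nvidia-smi", "--query-gpu=temperature.gpu,clocks.gr",
         "--format=csv,noheader,nounits"]

samples = []
start = time.time()
while time.time() - start < 15 * 60:      # e.g. 10 minutes of warm-up plus 5 minutes of logging
    temp_c, clock_mhz = subprocess.check_output(QUERY, text=True).strip().split(", ")
    samples.append((round(time.time() - start), int(temp_c), int(clock_mhz)))
    time.sleep(1.0)
```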

Gigabyte’s card is also somewhat unique since it can inherently run at one of three preset speeds, provided the Xtreme Engine utility is installed. Below you will see ECO, Gaming and OC all highlighted. Also remember that the GTX 1080 G1 Gaming defaults to the Gaming setting if the software isn’t installed or the OC preset hasn’t been selected.

Temperatures between Gigabyte’s different presets are basically identical with all topping out at 70°C give or take a degree or two. Basically every one of the modes strives to attain 70°C, simply fluctuating fan speeds or frequency output in an effort to hit this level. This results in long term temperatures that are a bit lower than the ones achieved by EVGA’s GTX 1080 Superclocked, but not by much.

Fan speed differences between the Gaming and ECO modes are virtually nil, with the only exception being that ECO tends to hold the idle fan state a bit longer. OC Mode on the other hand sees its rotational speeds hitting about 150RPM higher than the other two as the test progresses. One note of concern here is that even though the new WindForce 3X cooler has a trio of fans and a massive heatsink, it seems to be a bit inefficient when compared directly against EVGA’s competing solution. While the ACX 3.0 does have slightly higher temperatures, it required much lower fan speeds to attain its results.

Once again the differences between the Gaming and ECO modes are nonexistent, with frequencies hovering between the 1873MHz and 1885MHz marks. The OC Mode does exhibit a bit more fluctuation but it is indeed consistently higher than the other two settings, though only by about 50MHz. To me, this makes Gigabyte’s card a dual-mode solution rather than the triple-mode one their Xtreme Engine utility would lead you to believe.

Actual performance is consistent across the board, with the 50MHz boost granted by the OC mode accounting for (at most) a 2FPS benefit. The other two modes perform in line with what the EVGA GTX 1080 Superclocked could achieve.

Ashes of the Singularity

Ashes of the Singularity is a real time strategy game on a grand scale, very much in the vein of Supreme Commander. While this game is best known for its asynchronous compute workloads through the DX12 API, it also happens to be pretty fun to play. While Ashes has a built-in performance counter alongside its built-in benchmark utility, we found it to be highly unreliable and it often posts substantial run-to-run variation. With that in mind we still used the onboard benchmark since it eliminates the randomness that arises when actually playing the game, but utilized the PresentMon utility to log performance.


Fallout 4

The latest iteration of the Fallout franchise is a great looking game with all of its details turned up to their highest levels, but it also requires a huge amount of graphics horsepower to run properly. For this benchmark we complete a run-through from within a town, shoot up a vehicle to test performance when in combat and finally end atop a hill overlooking the town. Note that VSync has been forced off within the game’s .ini file.


Far Cry 4

This entry in Ubisoft’s Far Cry series takes up where the others left off by boasting some of the most impressive visuals we’ve seen. In order to emulate typical gameplay we run through the game’s main village, head out through an open area and then transition to the lower areas via a zipline.


Grand Theft Auto V

In GTA V we take a simple approach to benchmarking: the in-game benchmark tool is used. However, due to the randomness within the game itself, only the last sequence is actually used since it best represents gameplay mechanics.


Hitman (2016)

The Hitman franchise has been around in one form or another for over a decade and a half, and this latest version is arguably the best looking. Adjustable to both DX11 and DX12 APIs, it has a ton of graphics options, some of which are only available under DX12.

For our benchmark we avoid using the in-game benchmark since it doesn’t represent actual in-game situations. Instead the second mission in Paris is used. Here we walk into the mansion, mingle with the crowds and eventually end up within the fashion show area.


Rise of the Tomb Raider

Another year and another Tomb Raider game. This time Lara’s journey continues through various beautifully rendered locales. Like Hitman, Rise of the Tomb Raider has both DX11 and DX12 API paths and incorporates a completely pointless built-in benchmark sequence.

The benchmark run we use is within the Soviet Installation level, where we start at about the midpoint, run through a warehouse with some burning items inside and then finish inside a fenced-in area during a snowstorm.


Star Wars Battlefront

Star Wars Battlefront may not be one of the most demanding games on the market but it is quite widely played. It also looks pretty good since it is based upon DICE’s highly optimized Frostbite engine.

The benchmark run in this game is pretty straightforward: we use the AT-ST single player level since it has predetermined events and it loads up on many in-game special effects.


The Division

The Division has some of the best visuals of any game available right now even though its graphics were supposedly downgraded right before launch. Unfortunately, actually benchmarking it is a challenge in and of itself. Due to the game’s dynamic day / night and weather cycle it is almost impossible to achieve a repeatable run within the game itself. With that taken into account we decided to use the in-game benchmark tool.


Witcher 3

Other than being one of 2015’s most highly regarded games, The Witcher 3 also happens to be one of the most visually stunning. This benchmark sequence has us riding through a town and running through the woods; two elements that will likely take up the vast majority of in-game time.


DX12 Benchmarks

Ashes of the Singularity

Ashes of the Singularity is a real time strategy game on a grand scale, very much in the vein of Supreme Commander. While this game is best known for its asynchronous compute workloads through the DX12 API, it also happens to be pretty fun to play. While Ashes has a built-in performance counter alongside its built-in benchmark utility, we found it to be highly unreliable and it often posts substantial run-to-run variation. With that in mind we still used the onboard benchmark since it eliminates the randomness that arises when actually playing the game, but utilized the PresentMon utility to log performance.


Hitman (2016)

The Hitman franchise has been around in one form or another for over a decade and a half, and this latest version is arguably the best looking. Adjustable to both DX11 and DX12 APIs, it has a ton of graphics options, some of which are only available under DX12.

For our benchmark we avoid using the in-game benchmark since it doesn’t represent actual in-game situations. Instead the second mission in Paris is used. Here we walk into the mansion, mingle with the crowds and eventually end up within the fashion show area.


Quantum Break

Years from now people likely won’t be asking if a GPU can play Crysis, they’ll be asking if it was up to the task of playing Quantum Break with all settings maxed out. This game was launched as a horribly broken mess but it has evolved into an amazing looking tour de force for graphics fidelity. It also happens to be a performance killer.

Though finding an area within Quantum Break to benchmark is challenging, we finally settled upon the first level where you exit the elevator and find dozens of SWAT team members frozen in time. It combines indoor and outdoor scenery along with some of the best lighting effects we’ve ever seen.


Rise of the Tomb Raider

Another year and another Tomb Raider game. This time Lara’s journey continues through various beautifully rendered locales. Like Hitman, Rise of the Tomb Raider has both DX11 and DX12 API paths and incorporates a completely pointless built-in benchmark sequence.

The benchmark run we use is within the Soviet Installation level, where we start at about the midpoint, run through a warehouse with some burning items inside and then finish inside a fenced-in area during a snowstorm.


Thermal Imaging

Typically we use thermal imaging to pick up any heat buildup issues but it can only tell us so much if most of the card is covered by heatsinks as is the case with Gigabyte’s G1 Gaming. From what we can tell however, there aren’t any areas of immediate concern.

Acoustical Testing

What you see below are the baseline idle dB(A) results attained for a relatively quiet open-case system (specs are in the Methodology section) sans GPU along with the attained results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear-level exactly 12” away from the GPU’s fan. For the load scenarios, Rise of the Tomb Raider is used to generate a constant load on the GPU(s) over the course of 15 minutes.

As you may have already deduced from the fan speed results published earlier in this review, the GTX 1080 G1 Gaming isn’t a quiet card by any stretch of the imagination regardless of which mode is activated. While the ECO and default Gaming modes provide completely acceptable acoustical levels, they are still noticeably higher than EVGA’s competing Superclocked with its ACX cooler.

The OC Mode on the other hand sacrifices acoustics for higher core frequencies and performance, pushing fan speeds higher to get there. This is obviously the preset you’ll want if noise isn’t a concern.

One thing we need to mention is that these values are completely modifiable through the Xtreme Engine software, which can offer as quiet or as loud an environment as someone would want. However, if you do decide on a more silent experience, be prepared for lower framerates.

System Power Consumption

For this test we hooked up our power supply to a UPM power meter that will log the power consumption of the whole system twice every second. In order to stress the GPU as much as possible we used 15 minutes of Unigine Valley running on a loop while letting the card sit at a stable Windows desktop for 15 minutes to determine the peak idle power consumption.

When it comes to power consumption, there’s a bit of a surprise. Despite its upgraded non-reference design, lower temperatures and similar clock speeds in Gaming and ECO modes, the Gigabyte GTX 1080 G1 Gaming isn’t any more efficient than EVGA’s GTX 1080 Superclocked. This could be due to anything from core variance to higher power draw for the WindForce 3X’s three fans. Naturally, OC mode moves things up a bit but under no circumstance could you consider this card to be anything but highly efficient compared to the previous generation’s offerings.

Overclocking Results

With strictly capped voltage / power limits, Pascal cards have proven to be somewhat unreliable overclockers here at Hardware Canucks. Our Founders Edition sample hit a stable frequency of 2126MHz while the more recently reviewed EVGA GTX 1080 Superclocked topped out at 2113MHz. Personally I had high expectations for the Gigabyte card since it is formidably built, uses a custom PCB design, houses upgraded components and includes the newly refreshed Xtreme Engine utility. Unfortunately, that just wasn’t meant to be.

The maximum achievable core overclock was 2050MHz, placing the G1 Gaming well behind both of the other two samples I have in hand. The PerfCap reason tells the story of why this is: the core on this particular sample obviously needs more voltage to run at higher clocks than Gigabyte’s utility can provide. As you can see, even the meager Power Limit increase of just 8% didn’t seem to be the limiting factor. That’s a bit disappointing, but we can’t forget that overclocking headroom is a lottery, with some samples being better than others. Considering the minimal difference between this overclock and the OC Mode preset, which saw the card running just 100MHz slower, I’d call our sample a “plug and play” card which doesn’t need to be overclocked to hit optimal performance.
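
As an aside for anyone wanting to check this on their own card, nvidia-smi can report a subset of throttle reasons from a script, though notably not the voltage-reliability limit encountered here; the sketch below is illustrative only.

```python
# Sketch: query the throttle reasons nvidia-smi exposes (power cap, thermal, generic hw slowdown).
# Field availability varies by driver, and the voltage-reliability cap seen in this review
# is reported by tools such as GPU-Z rather than nvidia-smi.
import subprocess

fields = ("clocks_throttle_reasons.sw_power_cap,"
          "clocks_throttle_reasons.hw_thermal_slowdown,"
          "clocks_throttle_reasons.hw_slowdown")
out = subprocess.check_output(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
    text=True).strip()
print("power cap / thermal / hw slowdown:", out)
```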

Memory overclocking was a bit better but it too fell short of the EVGA card with a final speed of just under 11Gbps. Considering Pascal cards aren’t memory limited in the least, this result likely won’t increase framerates all that much.

Conclusion

When it was first launched, NVIDIA’s GTX 1080 took the GPU market by storm and I find myself continually impressed with what it has to offer. While not inexpensive by any stretch of the imagination, these cards are exceedingly fast, efficient and well positioned for future gaming environments. Meanwhile, with cards like the Gigabyte GTX 1080 G1 Gaming, NVIDIA’s board partners keep showing us why the Founders Edition ends up feeling overpriced by comparison. Not only is the G1 Gaming faster but it also happens to cost less and offers far more customization options for end users.

In comparison to the Founders Edition, Gigabyte hits things straight out of the park. Not only do their clock speed increases make a sometimes-noticeable difference in terms of framerates, but the G1 Gaming’s frequencies are delivered in an extremely consistent manner. The same can’t be said about NVIDIA’s reference board, which ended up going weak in the knees as its cooler failed to keep pace with rising temperatures.

Due to their identical $649 price tags there will invariably be parallels drawn between the GTX 1080 G1 Gaming and EVGA’s GTX 1080 Superclocked. From a personal perspective I don’t think Gigabyte wins that fight outright, even if we ignore its overclocking results and the sample-to-sample variability that comes with them. This card does have features that are lacking on the Superclocked, such as a controllable RGB LED, preset modes for novice-friendly performance / noise adjustments and a custom PCB. If you are looking for a high end GPU that can fit with your case’s internal lighting scheme and don’t want to spend a few extra bucks for EVGA’s FTW, this might be a great choice.

Where things go slightly awry is with some of those selfsame features. While Gigabyte does offer limitless fan profile changes, plug and play users will find the G1 Gaming to be noticeably louder than the Superclocked regardless of which preset they choose. This isn’t due to a lack of cooling mass either, since the Windforce 3X heatsink leads to a large footprint, one that’s ¾” longer than EVGA’s. Granted, those higher fan rotational speeds do lead to great temperature results, but there’s no denying acoustics were offered up on a sacrificial altar here.

Speaking of the three preset modes, I absolutely love the idea behind them since they should grant one-click modification of the G1 Gaming’s various parameters. “Should” is the operative word here since in practice there were literally no measurable differences between the ECO and Gaming modes. OC Mode is quite a different animal since it did boost clock speeds and performance as promised, but I would have liked to see even a bit of variation between the two lower software gears. This could very well be a case of mistaken identity too, since Gigabyte may not have actually intended this card to have an ECO mode despite it being present and selectable in their utility.

Special mention also has to be made of Gigabyte’s new Xtreme Engine application since it is light years ahead of its predecessor. OC Guru was competent once you got past its hidden menus and general quirkiness, but the Xtreme Engine is quick, features a sleek GUI, is user friendly and is loaded with must-have features. It really is amazing to see a piece of software evolve from archaic to class leading in one short generation but that’s exactly what Gigabyte has accomplished here.

The Gigabyte G1 Gaming is an awesome graphics card that is loaded with features which will appeal to novices and longtime enthusiasts alike. While it may be louder and longer than some competing solutions, the former problem becomes a non-issue if you are willing to delve into the intuitive software, while the latter won’t be a problem for anyone building into most modern chassis. If you are in the market for a GTX 1080, this one should be near the top of your list.
