
NVIDIA GeForce GTX TITAN: GK110’s Opening Act

SKYMTL
HardwareCanuck Review Editor
This is a preview of the GTX TITAN and its many features. For our full review, visit HWC on Thursday February 21st @ 9AM EST

GK110 may be one of the worst-kept secrets of the last year. Gamers have known about it since Kepler was first announced and they’ve been gazing at it longingly ever since. A massively endowed Kepler core has existed for some time, but enthusiasts have been fed a constant diet of more efficient, affordable solutions in the form of GK104 (GTX 680, GTX 670 and GTX 660 Ti) and GK106 (GTX 660 and GTX 650 Ti). But it was only a matter of time until NVIDIA adapted their GK110 for the gaming market, and the end result is aptly named TITAN.

While NVIDIA’s Kepler-based GeForce 600-series lineup has been primarily based upon smaller cores, the large footprint GK110 equivalents have been exclusively used for Tesla products. For example, the GK110-based Tesla K20 lies at the heart of Oak Ridge’s chart-topping TITAN Supercomputer. That’s about to change with the GTX TITAN, the most powerful and most expensive single core enthusiast-level graphics card the world has ever seen.

GTX-TITAN-18.jpg

By eschewing the standard nomenclature of the GeForce series, NVIDIA has set the GTX TITAN outside the tidy predefined box most graphics cards are usually placed in. The reasons for this decision are quite straightforward: TITAN is unlike any previous GPU. It features a heart destined for supercomputing applications with the soul of a gaming-grade graphics card, and is ultimately meant to create what NVIDIA calls “Gaming Supercomputers”. More importantly, by leveraging GK110’s relatively low power profile, these high end computers can now be created using smaller chassis.

GTX-TITAN-26.jpg

In order to create a class-leading GPU, NVIDIA was able to leverage GK110’s TITANic compute specifications and apply them directly to a core that’s meant for high performance gaming. TITAN’s 2688 CUDA cores and 224 TMUs represent a 75% increase over the GK104 core, while the ROP count has been bumped up by 50%. Meanwhile, the card has received the same 6Gbps GDDR5 modules found in other high end NVIDIA cards, but there’s now 6GB on tap which is paired up with a 384-bit memory interface, culminating in an impressive 288.5 GB/s of bandwidth.
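For anyone who wants to double check that arithmetic, the back-of-the-envelope calculation below works through it (a purely illustrative Python sketch using the spec figures quoted above, not anything from NVIDIA):

```python
# Spec figures quoted above; the comparison itself is just illustrative.
gk104 = {"CUDA cores": 1536, "TMUs": 128, "ROPs": 32}
titan = {"CUDA cores": 2688, "TMUs": 224, "ROPs": 48}

for key in gk104:
    increase = (titan[key] - gk104[key]) / gk104[key] * 100
    print(f"{key}: +{increase:.0f}% over GK104")   # cores and TMUs +75%, ROPs +50%

# Memory bandwidth: 384-bit bus * ~6 Gbps effective GDDR5, divided by 8 bits per byte
bus_width_bits = 384
effective_rate_gbps = 6.0
print(f"Bandwidth: {bus_width_bits * effective_rate_gbps / 8:.0f} GB/s")   # ~288 GB/s
```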

With all of this raw performance on tap, one would expect the GTX TITAN to be outrageously power hungry and produce a ton of heat. Thankfully, that just isn’t the case. Since the GK110 core is engineered for HPC environments which demand high efficiency in order to maximize performance per square foot, the TITAN derivative remains surprisingly mild mannered. Naturally, TSMC’s 28nm manufacturing process and TITAN’s lower operating speeds help immensely in this department, and supposedly there’s some additional wizardry going on behind the scenes. NVIDIA is keeping details about this close to their chests, so let’s just say that a TDP of 250W is impressive and keep it at that.

In terms of availability and pricing, it’s a somewhat mixed bag. An SRP of $999 puts TITAN well beyond the reach of most gamers and enthusiasts, but this isn’t a limited edition card which will only have a production run of 1,000 (we’re looking at you ARES II) so availability should be similar to that of the GTX 690. Just don’t cross your fingers hoping that every retailer will receive stock. While EVGA and ASUS are both supporting this launch in North America, TITAN will likely only be offered through primary retail channels like Newegg, Amazon and NCIX.

Since this launch is happening at the tail end of the Chinese New Year, expect some slight delays in retail availability. And we do mean slight, since TITAN cards should be available to purchase during the week of February 25th, with some system builders potentially having cards sooner than that.

GTX-TITAN-10.jpg

Some may be wondering where TITAN lies within NVIDIA’s current lineup since a $999 price is equal to that of a GTX 690, yet in terms of raw performance it will fall short of NVIDIA’s dual card solution. However, the allure of owning the fastest single card on the planet will appeal to many while others will appreciate the TITAN’s compatibility with triple and quad SLI.

This isn’t just about novelty either, since there are several reasons why someone should be looking at TITAN rather than a GTX 690. In many ways, the two can’t even be directly compared due to their fundamentally different designs. The GTX 690 will only live up to its maximum potential in games where SLI provides perfect scaling, and those situations are relatively rare. Meanwhile, the GTX TITAN doesn’t have to worry about pesky dual card profiles or SLI link latency getting in the way of performance. It will also consume significantly less power and output less heat than a GTX 690, allowing for adoption within smaller chassis.

NVIDIA envisions TITAN becoming the go-to solution for system builders who want to pack the maximum amount of performance into SFF cases or simply create the fastest system on the block. Its unique combination of no holds barred performance and relative efficiency makes for an adaptable graphics card for numerous situations from SFF cases to large gaming PCs.

Make no mistake about it though, the GeForce GTX TITAN is immensely expensive and will likely bear the brunt of derision from some naysayers in the months ahead. It is architected to be the thoroughbred supercar of the graphics card industry, so even though “value” will likely never enter into the equation, TITAN aims to be the pinnacle of modern GPU design.

For the time being, we aren’t allowed to talk about TITAN’s performance (wait for our full review at 9AM EST on February 21st) but there are still plenty of bases to cover in this article.
 
GK110 Bares All & Adds Double Precision


With Kepler maturing in a number of product spaces, NVIDIA has gradually perfected their manufacturing process, increasing yields and allowing the GK110 to become a bona fide option for the GeForce lineup. Since Kepler was designed with both gaming and HPC environments in mind, porting GK110 into a GeForce product required very few sacrifices, and large blocks of advanced HPC-oriented features have been carried over en masse.

GTX-TITAN-21.jpg

GK110 is by far the largest and most complex GPU NVIDIA has ever built. It is a 7.1 billion transistor monster with a die that measures 551mm², which veritably dwarfs the 294mm² GK104 core and even outsizes the GF110’s 521mm². However, as we’ve already mentioned, this gigantic footprint hasn’t necessarily translated into out of control temperatures or power consumption like it did with GF100. Rather, NVIDIA has kept these variables on a short leash.

GTX-TITAN-20.jpg

From a high level architectural standpoint, the GK110 core is just a supersized GK104 with a whole lot of cores and an additional GPC. Indeed, all Kepler GPUs share the same basic elements which fit together into a cohesive design. The real differences here lie at the SMX level which retains many of the Tesla-centric elements for optimized compute performance.

In its GeForce TITAN guise this core incorporates 14 SMX blocks (a fully enabled GK110 houses 15, so one has been disabled, likely to increase yields), each of which holds 192 CUDA cores and 16 texture units for a total of 2688 cores and 224 TMUs. These are split into five GPCs, each of which contains its own Raster Engine. While GK104 pairs just two SMXs with each Raster Engine versus up to three here, there shouldn’t be any additional overhead since the central processing stages are more than fast enough to ensure the Raster Engines don’t fall behind in their scheduled tasks and bottleneck performance.

As with all of NVIDIA’s architectures dating back to Fermi, the memory controller, ROP structure and L2 cache are tied at the hip, leading to six 64-bit memory controllers which are each paired up with eight ROPs and 256KB of L2 Cache. For more detail about the Kepler architecture, make sure to read our architectural analysis posted in our GTX 680 review.
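As a quick sanity check of that partition layout, the per-controller figures above add up as follows (illustrative only):

```python
# Per-partition figures quoted above; the totals are just a sanity check.
controllers = 6           # six 64-bit memory controllers
bus_per_controller = 64   # bits
rops_per_controller = 8
l2_per_controller_kb = 256

print("Total bus width:", controllers * bus_per_controller, "bit")    # 384-bit
print("Total ROPs:", controllers * rops_per_controller)               # 48
print("Total L2 cache:", controllers * l2_per_controller_kb, "KB")    # 1536 KB (1.5MB)
```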

GTX-TITAN-19.jpg

The largest changes in the GK110 reside in the way it handles compute data. While the SMX layout still includes the PolyMorph Engine’s fixed function stages, 64KB of shared memory, data cache and its associated texture units, the CUDA core layout has been drastically changed. It still houses 192 single precision cores backed up by 32 load/store units and 32 special function units which are able to process 32 parallel threads, but these have been augmented with 64 FP64 Double Precision units.

While the GK104 core did feature Double Precision support, it only included eight FP64 units per 192-core SMX, resulting in an FP64 rate of just 1/24th the single precision throughput. With TITAN, NVIDIA has increased this ratio to 1/3, putting 896 double precision units at the disposal of a single GK110 GPU. In addition, when working in FP64 mode, TITAN disables GPU Boost and operates at dynamically lowered clock speeds.
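The double precision math is easy enough to verify with the unit counts quoted above (again, purely illustrative):

```python
# Unit counts quoted above; this simply reproduces the 896 / 1:3 / 1:24 figures.
smx_enabled = 14            # one of GK110's 15 SMX blocks is disabled on TITAN
fp32_per_smx = 192
fp64_per_smx_gk110 = 64
fp64_per_smx_gk104 = 8

print("FP64 units on TITAN:", smx_enabled * fp64_per_smx_gk110)                  # 896
print("GK110 FP64 rate: 1/%d of FP32" % (fp32_per_smx // fp64_per_smx_gk110))    # 1/3
print("GK104 FP64 rate: 1/%d of FP32" % (fp32_per_smx // fp64_per_smx_gk104))    # 1/24
```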

GTX-TITAN-28.jpg

At face value, the inclusion of full Double Precision functionality may not seem like a major selling point for enthusiasts and, truth be told, it isn’t. Games and even applications like Folding@Home simply don’t use the double precision floating point format. Rather, granting access to a $999 FP64 powerhouse makes CUDA development much more accessible since full speed DP support no longer requires a $3000 Tesla K20 or $4500 K20X card. NVIDIA is hoping this will lead to something of a renaissance for CUDA programming and will open up this space to a whole new beginner-focused market.

Since gamers won’t want to run their card in its 896-core Double Precision mode, NVIDIA has granted easy on/off control over it. Simply change the mode within NVIDIA’s Control Panel to GeForce TITAN and you’re off to the races, though at slightly lower clock speeds than if the card were running in full 3D mode.
 
GTX TITAN Under the Microscope


GTX-TITAN-1.jpg

With its incorporation of high end materials, NVIDIA’s GeForce Titan looks more like the GTX 690 than other members of the GeForce lineup. This shouldn’t be a surprise considering the astronomical $999 price, but the focus here is efficient heat conduction and quiet operation. Plus, it just looks great and only measures 10.5” long, making for an easy fit into nearly any case.

GTX-TITAN-2.jpg
GTX-TITAN-8.jpg

Unfortunately, unlike its dual card sibling, the GTX TITAN doesn’t have any of the fancy materials like a silver trivalent chromium finish or injection molded magnesium alloy. Rather, it uses cast aluminum panels which have been bolted together to ensure they don’t shift out of their precise alignment. There’s also a heat resistant polycarbonate window.

One of the most important aspects of this design is its ability to exhaust hot air outside of the chassis. This makes the TITAN a perfect fit for SFF cases which have limited cooling capabilities and would simply be overwhelmed by the GTX 690’s axial heatsink design.

GTX-TITAN-4.jpg
GTX-TITAN-5.jpg

The TITAN’s rearmost area is dominated by a secondary heatsink which maximizes the fan’s intake airflow. Power is handled by an 8+6 pin combination, which ensures adequate current delivery while also providing some headroom for overclocking.

GTX-TITAN-9.jpg
GTX-TITAN-15.jpg

For those with windowed cases, NVIDIA has provided a glowing GeForce GTX logo that’s been laser cut into the TITAN’s cast aluminum side panel.

GTX-TITAN-7.jpg

The backplate doesn’t show any surprises with two DVI connectors as well as full-size DisplayPort and HDMI outputs. Luckily, this allows for triple monitor support without using any adapters, though for 3D Vision Surround you’ll need an expensive active DisplayPort to Dual Link DVI dongle.

GTX-TITAN-12.jpg

The TITAN’s primary heatsink is quite extensive and covers the core as well as all front-mounted memory ICs. It uses a closed vapor chamber topped with a dense fin array optimized to funnel fresh air through quickly and efficiently, along with a custom Shin-Etsu thermal compound for optimal heat transfer. NVIDIA has also extended the secondary heatsink so it covers nearly all of the VRM components and acts as a PCB stiffener, reducing board flex.

GTX-TITAN-13.jpg

With the heatsink removed, the sheer size of the GK110 core becomes evident alongside the 24 GDDR5 ICs (12 on the front and 12 on the PCB’s underside). On reference boards, NVIDIA has installed an advanced 6+2 phase all-digital PWM layout which should provide adequate and stable board current.

GTX-TITAN-6.jpg

Finally, the board’s underside reveals a non-standard heatsink bolt pattern which hasn’t been used by any past NVIDIA card. It seems like this PCB layout has been lifted directly from the Tesla version since it features a second 8-pin power connector location which is utilized on some K20x cards.
 
GPU Boost 2.0 Explained & Tested


When the GTX 680 was introduced, GPU Boost quickly became one of its most talked-about features. At its most basic, NVIDIA’s GPU Boost monitored the power requirements of your graphics card and dynamically adjusted clock speeds in order to keep it at a certain power target. Since most games don’t take full advantage of a GPU’s resources, in many cases this meant Kepler-based cards were able to operate at higher than reference frequencies.

In order to better define this technology, NVIDIA created a somewhat new lexicon for enthusiasts which was loosely based upon Intel’s current nomenclature. Base Clock is the minimum speed at which the GPU is guaranteed to operate under strenuous gaming conditions, while Boost Clock refers to the average graphics clock rate when the system detects sufficient TDP overhead. As we saw in the GTX 680 review, the card was able to Boost above the stated levels in non-TDP-limited scenarios, but the technology was somewhat limited in the way it monitored on-chip conditions. For example, even though the ASIC’s Power Limit could be modified to a certain extent, the monitoring solution treated TDP as its only yardstick instead of factoring in additional (and essential) items like temperature.

GTX-TITAN-15.gif

In an effort to move past GPU Boost’s original limitations, the GPU Boost 2.0 implementation available on TITAN uses the available temperature headroom when determining clock speeds. This should lead to a solution that takes the critical metrics into account before making a decision about the best clocks for a given situation. The Power Target is also taken into account, but you can now tell EVGA Precision or other manufacturers’ software to prioritize either temperatures or power readings when determining Boost clocks.
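To give a rough feel for how a temperature-prioritized boost decision might behave, here’s a simplified sketch. NVIDIA hasn’t published the actual GPU Boost 2.0 logic, so the function name, thresholds and 13MHz step below are invented purely for illustration:

```python
# Hypothetical sketch of a temperature-prioritized boost step; not NVIDIA's algorithm.
def next_boost_clock(current_mhz, gpu_temp_c, board_power_pct,
                     temp_target_c=80, power_target_pct=100,
                     prioritize="temperature", step_mhz=13):
    """Return an illustrative clock for the next sampling interval."""
    if prioritize == "temperature":
        headroom = gpu_temp_c < temp_target_c and board_power_pct < power_target_pct
        over_limit = gpu_temp_c > temp_target_c
    else:  # prioritize power readings instead, closer to the original GPU Boost
        headroom = board_power_pct < power_target_pct
        over_limit = board_power_pct > power_target_pct

    if over_limit:
        return current_mhz - step_mhz   # throttle back toward the target
    if headroom:
        return current_mhz + step_mhz   # climb while there is headroom
    return current_mhz                  # otherwise hold the current bin

# Plenty of thermal and power headroom, so the card climbs one bin
print(next_boost_clock(876, gpu_temp_c=68, board_power_pct=85))   # 889
```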

While some of the other technologies described in this article will eventually find their way into Kepler and even Fermi cards, GPU Boost 2.0 will remain a GK110 exclusive.

GTX-TITAN-22.jpg

In order to better demonstrate this new Boost Clock calculation, we ran some simple tests on a GeForce TITAN using different temperature targets while running at a 107% Power Offset. For this test we used EVGA’s Precision utility with priority on the Temp Target.

GTX-TITAN-27.jpg

The success of GPU Boost 2.0 becomes readily apparent when different temperature targets and their resulting clock speeds are compared against one another. At default, the TITAN is set to run with a target of 80°C and a relatively pedestrian fan speed of between 35% and 45%, making it very quiet.

As we can see, voltage and clock speeds steadily decrease as the target is lowered. This is because the built-in monitoring algorithm is trying to strike a delicate balance between maximizing clock speeds and voltage, while also taming noise output and temperatures. With a TDP of some 250W, accomplishing such a feat isn’t easy at lower temperatures so Boost clocks are cut off at the knees in some cases.

Increasing the Temperature Target above 80°C has a positive impact, up to a certain extent. Since there are some limits imposed at GK110’s default Boost voltage (1.162V in this case), clock speeds tend to plateau between 990MHz and 1GHz without modifying the GPU Clock Offset or core voltage. Yes, the GK110 does indeed support voltage increases but more on that in the next section.

This new GPU Boost algorithm rewards lower temperatures which will be a huge boon for people using water cooling or those willing to put up with slightly louder fan speeds. Simply keeping the Temp Target at its default 80°C setting and cooling the core to anything under that point will allow for moderately better clock speeds (up to 1GHz in our tests) with a minimum of effort. If you’re the enterprising type, a combination of voltage and a higher GPU Offset could allow a better cooling solution to start paying dividends in no time. Also remember that ambient in-case temperatures play a huge part in GPU Boost’s calculation so ensuring a well ventilated case could lead to potential clock speed improvements.

GTX-TITAN-16.gif

As you can imagine, mucking around with the temperature target could potentially have a dramatic effect upon fan speeds, but NVIDIA has taken care of that concern. In the new GPU Boost, the fan speed curve dynamically adjusts itself to the new Temperature Target and will endeavor to hold a constant speed without any distracting spikes. There’s hope the new algorithm will reduce TITAN’s acoustical footprint regardless of the temperature target.
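To picture how a fan curve can be rescaled rather than spiked, consider the toy example below. The reference points are invented and are not NVIDIA’s actual curve; lowering the Temperature Target simply compresses the same ramp so it reaches its plateau sooner:

```python
# Toy fan curve rescaled to a Temperature Target; all reference points are invented.
def fan_speed_pct(gpu_temp_c, temp_target_c=80,
                  idle_temp_c=40, idle_fan_pct=30, target_fan_pct=45):
    """Linearly interpolate fan speed between an idle point and the target."""
    if gpu_temp_c <= idle_temp_c:
        return idle_fan_pct
    if gpu_temp_c >= temp_target_c:
        return target_fan_pct
    span = (gpu_temp_c - idle_temp_c) / (temp_target_c - idle_temp_c)
    return idle_fan_pct + span * (target_fan_pct - idle_fan_pct)

print(fan_speed_pct(65, temp_target_c=80))   # ~39% with the default target
print(fan_speed_pct(65, temp_target_c=70))   # ~43% once the target is lowered
```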
 
Kepler’s Overclocking: Overhauled & Over-Volted


One of the major criticisms leveled at the early Kepler cards like the GTX 680 was their bewildering lack of core voltage control. Some board partners eventually added the capability to modify their cards’ voltage values, but those options were quickly removed in favor of a supposedly “safer” approach to overclocking. Enthusiasts weren’t happy and neither were NVIDIA’s partners, since they had to eliminate certain advertised features from their products.

With GK110, voltage control is making a return… to a certain extent, and regrettably it won’t cascade down to lower-end SKUs like the GTX 680. NVIDIA will be allowing voltage changes, but the actual upper limits will remain under strict control and could be eliminated altogether on some products should a board partner decide to err on the side of caution.

GTX-TITAN-28.gif

On the previous page, we detailed how GK110’s Boost Clock is largely determined by a complex calculation which takes into account temperatures, clock speeds, fan speeds, Power Target and core voltage. As temperatures hit a given point, Boost 2.0 will adapt performance and voltage in an effort to remain at a constant thermal point. This methodology remains in place when using GK110’s expanded overclocking suite. However, in this case, the application of additional voltage will give Boost another “gear” so to speak, allowing for higher clock speeds than would normally be achieved.

GTX-TITAN-29.gif

Since Over Voltage control falls under the auspices of GPU Boost 2.0, the associated limitations regarding thermal limits and their relation to final in-game clock speeds remain in place. Increasing voltage will of course have a negative impact upon thermal load, which could lead to Boost throttling performance back until an acceptable temperature is achieved. Therefore, even when a voltage increase is combined with a higher Power Limit and GPU Clock Offset, an overclock may still be artificially limited by the software’s ironclad grip unless temperatures are reined in. This is why you’ll likely want to look at improving cooling performance before assuming an overclock will yield better results or, while not recommended, expanding the Temperature Target to somewhere above the default 80°C mark.
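The interplay is easier to see with a small hypothetical example. The base clock, bin size and throttle behaviour below are made up for demonstration; the takeaway is simply that extra voltage only buys additional frequency while temperatures stay under the target:

```python
# Invented illustration of voltage vs. temperature throttling; not NVIDIA's behaviour.
def effective_boost(base_boost_mhz, offset_mhz, voltage_v, gpu_temp_c,
                    temp_target_c=80, bin_mhz=13):
    requested = base_boost_mhz + offset_mhz
    if voltage_v > 1.162:              # overvoltage requests one extra bin
        requested += bin_mhz
    if gpu_temp_c > temp_target_c:     # over the target, Boost claws clocks back anyway
        return requested - 2 * bin_mhz
    return requested

print(effective_boost(876, 0, 1.200, gpu_temp_c=75))   # voltage pays off: 889 MHz
print(effective_boost(876, 0, 1.200, gpu_temp_c=85))   # throttled regardless: 863 MHz
```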

GTX-TITAN-23.jpg

GTX TITAN’s maximum default voltage is currently set at 1.162V which results in clock speeds that (in our case at least) run up to the 992MHz mark, provided there is sufficient thermal overhead. EVGA’s Precision tool meanwhile allows this to be bumped up to at most 1.2V (a mere .038V increase) which likely won’t result in a huge amount of additional overclocking headroom. However, while the maximum Boost clock may not be significantly impacted by the additional voltage, it should allow the TITAN to reach higher average speeds more often, thus improving overall performance. Also expect other board partners to have different maximum voltages set in their VBIOS.

In order to put NVIDIA’s new voltage controls to the test, we ran the TITAN at default clock speeds (GPU and Memory offsets were pegged at “0”) with the Power Target set to 106%, keeping the core temperature at a constant 75°C against a Temp Target of 80°C. In practice, this should allow that extra .038V to push the maximum Boost speed to another level.

GTX-TITAN-29.jpg

While the frequency increase we’re seeing here is rather anemic, NVIDIA’s inclusion of limited control over core voltage should be welcomed with open arms. Regardless of the end result, seeing a card like the TITAN operating above 1GHz is quite impressive. Some enthusiasts will likely turn their noses up at the severe handicap placed upon the maximum allowable limits, but anything more could negatively impact ASIC longevity.

In addition, don’t expect the same support from every board partner since NVIDIA hasn’t made the inclusion of voltage control mandatory, nor is the 1.2V maximum set in stone. Some manufacturers may simply forego voltage modifiers within their VBIOS and eliminate this feature altogether, while others could push past the 1.2V threshold. So do your research before jumping onto the bandwagon of a particular GTX TITAN model because true voltage control may be a rarity.

The real success or failure of NVIDIA’s voltage tool will ultimately be in the hands of enthusiasts who already feel left out in the cold after GK104’s limitations. To them, this may just be too little too late.
 
NVIDIA Introduces Overclocking… for Monitors


In the world of display technology, gamers continually find themselves fighting a losing battle when deciding between the image quality afforded by V-Sync and the relative smoothness of high framerates. No one wants their high end graphics card to be automatically capped by the screen’s refresh rate but running a panel and framerate asynchronously can lead to image tearing.

In the past, there were a number of solutions to this problem: either buy a monitor that’s capable of running at a higher refresh rate or artificially increase the pixel clock at which your GPU sends information to any connected displays. The latter route typically led to jumping through no small number of hoops but it allowed users to run their monitors at a higher refresh rate, thus capitalizing upon performance above 60FPS without offering up V-Sync like a sacrificial lamb.

GTX-TITAN-17.gif

NVIDIA is about to make this process a whole lot easier with their Display Overclocking feature which is set to be released with the latest overclocking software from EVGA, MSI and others. It allows a user to easily increase the pixel clock which in turn dynamically expands the refresh rate options for your monitor.

Now before you go cringing at the thought of ruining a brand new $700 27” panel, there should be very little to worry about here. If a given display isn’t compatible with a set pixel clock, one of two things will happen: it will either go blank until the software resets itself a few seconds later, or sub-pixel artifacts will appear. Either way, the possibility of damaging internal components is infinitesimal. Naturally, any overclocking is done at your own risk, but the benefits here could be enormous for gamers who have a compatible monitor and want a blend of image quality and performance.

The Display Overclocking feature will also work with NVIDIA’s Adaptive V-Sync by simply stepping down to an associated refresh rate to avoid stutter whenever large framerate discrepancies are detected.

GTX-TITAN-24.jpg

EVGA’s Pixel Clock OC tool is buried within Precision’s Bundle folder in the program’s install directory and is quite straightforward. Just put that slider to good use by increasing the Pixel Clock until the display’s signal cuts off or minor sub-pixel artifacts begin to appear. Once that happens, scale back by about 10% and retest.

Unfortunately, other than the VESA standards, there aren’t any conversion tables to figure out the “best” Pixel Clock for every monitor type, so a bit of trial, error and research is in order here.
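That said, a rough ballpark can be worked out with some simple math. The sketch below assumes CVT reduced-blanking-style timings (160 extra pixels per line, 46 extra lines per frame), which won’t match every monitor exactly but lines up nicely with the 2560x1600 / 80Hz result described below:

```python
# Ballpark refresh rate for a given pixel clock, assuming CVT reduced-blanking
# style timings. Real monitors vary, so treat the output as an estimate only.
def approx_refresh_hz(pixel_clock_mhz, h_active, v_active, h_blank=160, v_blank=46):
    pixels_per_frame = (h_active + h_blank) * (v_active + v_blank)
    return pixel_clock_mhz * 1_000_000 / pixels_per_frame

# A ~360 MHz pixel clock on a 2560x1600 panel lands right around 80Hz
print(round(approx_refresh_hz(360, 2560, 1600), 1))   # 80.4
```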

GTX-TITAN-25.jpg

Not all displays are capable of supporting higher than reference pixel clocks. For example, my Acer GD235HZ wouldn’t be pushed one iota past its 120Hz mark, but the trusty Samsung SyncMaster 305T (a 60Hz monitor) hit the 80Hz mark with a Pixel Clock of 360 MHz. Actually hitting the 80Hz / 80FPS mark on a 2560x1600 monitor isn’t easy, but the new refresh rate was recognized in every game currently installed on the test system.
 
SFF With Attitude: Digital Storm’s Bolt Titan Edition


While the PC gaming world is actually expanding at a moderate pace, its platform of choice is in the midst of some significant changes. Space devouring full tower cases may still be popular, but they’re rapidly making way for smaller form factors. In the past year or so we’ve seen a gradual shift away from the usual ATX and E-ATX boards, with more and more gamers choosing compact yet fully capable M-ATX and ITX offerings. Simply put, with square footage at a premium, people are beginning to take a long, hard look at previously atypical form factors.

GTX-TITAN-100.jpg

Building a true high performance gaming PC within an SFF chassis presents a unique set of challenges. These small cases just don’t have the cooling capacity or internal space enjoyed by their ATX siblings so hardware selection tends to be rather limited, particularly on the GPU side. With the GeForce GTX TITAN, NVIDIA is aiming to ditch these preconceptions by offering a gaming powerhouse which is relatively compact, promises unheard of performance from a single core card and won’t dump a ton of hot air into its immediate vicinity.

Digital Storm have taken this idea and incorporated it into their new Bolt Titan Edition system.

GTX-TITAN-104.jpg

The Bolt may represent Digital Storm’s smallest system but it cuts an absolutely striking figure. Slightly monolithic in appearance, it comes with a high quality white or black exterior finish, plenty of well-integrated ventilation areas and an exceedingly clean front façade. More importantly, the Bolt is barely taller than a bottle of wine and boasts a footprint which is no longer or wider than a half-folded piece of 11x17 tabloid-size paper.

GTX-TITAN-101.jpg
GTX-TITAN-102.jpg

Some expect build quality on prebuilt systems like this to suffer, but Digital Storm’s Bolt features some of the best construction we’ve seen in a while. Its metal panels don’t exhibit any flexing, material joints are minimal and the entire affair sits on a weighted, stable base.

GTX-TITAN-106.jpg

The external design may be stunning but the Bolt’s real claim to fame lies within. Packed into the relatively constricting inner confines is a set of components which would make most other systems green with envy. An Intel Core i7-3770K and 16GB of memory have been installed onto an ASUS P8Z77-I Deluxe ITX motherboard, alongside a 1TB hard drive and a 128GB Corsair Neutron GTX SSD running as the boot drive. There’s also a 500W FSP-branded power supply with 80 Plus Gold certification, a pre-installed WiFi card and a slot-loading DVD drive. Is there space to access your components? Not without giving this thing a frontal lobotomy, but that’s to be expected when working with an SFF chassis.

GTX-TITAN-105.jpg

Perched above this on a PCI-E riser card and separated from the power supply by a thick metal panel is NVIDIA’s GTX TITAN. According to Digital Storm, the TITAN has allowed them to deliver maximum performance without sacrificing internal temperatures or acoustics. Ironically, packing a massively powerful graphics card into the Bolt isn’t all that unique since this compact chassis has been offered with a GTX 690 for the last few months.

As you can imagine, the price for this setup is steep, but at $2,500 it is surprisingly reasonable considering the performance and convenience it offers in comparison to larger, similarly equipped systems.
 

The Non-Conclusion


So now that we’ve gone through an entire in-depth article about NVIDIA’s GeForce TITAN and what it will offer in the feature department, let’s discuss the 1000lb gorilla in the room: performance. Everyone wants to see it but unfortunately, that won’t be forthcoming just yet. In an effort to give editors some much-needed time benchmarking this card, NVIDIA has requested that full reviews be posted on Thursday, February 21st at 9AM EST (6AM PST).

While we may not be able to talk about performance, there are plenty of new features to discuss, some of which could be game-changers while others may unfortunately go overlooked.

Easy Display Overclocking is one of those items which many gamers may gloss over, but it could breathe new life into many displays. Implemented through manufacturer-built software, it is easy to use, worked without a hitch on our 30” display and will eventually be available on other NVIDIA graphics cards as well. Display Overclocking and its Adaptive V-Sync cousin are excellent ways to get the most out of your gaming experience without worrying about screen tearing or the stuttering which occurs when V-Sync and framerates run asynchronously.

The new GPU Boost 2.0 adds some welcome features to a technology which was far too focused upon hitting a predetermined Power Target while ignoring other factors. Putting the emphasis on temperature targets allows for a much more adaptable, situationally aware solution. It rewards better cooling methods with higher clock speeds and is able to maintain a mild mannered acoustical profile.

GTX-TITAN-30.jpg

GPU Boost 2.0 includes overvoltage controls as well but we’re on the fence about those. While we appreciate the bone being thrown to the overclocking community, its usefulness was seriously curtailed by EVGA’s unfortunate 1.2V VCore limit. In addition, the actual application of this extra voltage and any associated clock speed increases isn’t guaranteed unless the GPU is playing within NVIDIA’s constrained, fenced-in yard. Any attempt to proceed beyond those predetermined values will result in a swift slap-down of core frequencies and performance.

While full speed Double Precision support may not be of interest to gamers or many enthusiasts, its inclusion could help foster a budding CUDA development community. Now at-home programmers won’t have to shell out spectacular amounts of money for high end GPGPU performance and that should allow innovative, home-grown talent to shine.

The GeForce GTX TITAN looks like it will be the fastest single GPU graphics card available for the foreseeable future, but just how much faster remains to be seen. The HD 7970 GHz Edition is arguably the industry’s top dog right now, providing excellent performance, and even the GTX 680 could play the role of launch spoiler based on price alone. More importantly, will gamers be willing to shell out $999 for what TITAN offers? Make sure you check back on Thursday for the answers.
 