
GTX 800M: NVIDIA’s Maxwell Goes Mobile

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,857
Location
Montreal
NVIDIA’s introduction of the Maxwell-based GTX 750 Ti represented a turning point for the desktop GPU market, moving it towards a new level of performance without an associated increase in power consumption. If anything, the GTX 750 Ti blew preconceptions away and redefined what could be achieved within a low wattage framework. These advances are now partially making their way into the notebook market with the GTX 800M’s introduction.

The notebook segment has been quite volatile as of late with a large number of users gravitating towards the “good enough” approach offered by inexpensive tablets and, to a lesser extent, the latest crop of superphones. With a simple wireless keyboard, a $300 tablet can be converted from a basic media consumption device to a pretty capable platform for content creation. As a result, sales of some traditional low and mid range notebook categories are suffering while Ultrabook sales have been decimated.

This might sound like an odd preamble since by all indications the downturn in notebook interest should sound a death knell for stand-alone mobile GPUs, right? Not so fast. While the segments that include basic entry level systems may be in trouble, recent studies have shown that PC gamers are keeping both the desktop and notebook markets humming. Sales of those specifically targeted notebooks represent a potential cash cow and builders are rushing to adapt. As we saw at CES, more and more big-name companies are now refocusing their efforts on gaming-oriented systems while slimming down any mid range offerings being savaged by tablets.


This is where the 800M series factors into the equation. It represents a full court press in every part of the notebook market but there’s one common thread running throughout the new product stack: these new GPUs are made for the new realities of today’s demanding users. While outright performance is still essential, equal amounts of development have been put into lowering power consumption, boosting battery life and optimizing the way the core architecture interfaces with other components.

With the GTX 8xxM and GT 8xxM, there's truly something for everyone; from slim and light notebooks with a perfectly capable graphics subsystem to higher end gaming monsters with 17” screens and a bevy of other features. However, Maxwell’s introduction largely focuses on the former situations while NVIDIA’s Kepler is coming back for an encore in specifically tailored “gaming first” applications.

As you will see in the chart below, several of these new GPUs look like simple rebrands of NVIDIA’s GTX 700M series but there are some major differences as well. Through the use of a refined 28nm manufacturing process and other on-die improvements, NVIDIA has managed to offer much higher clock speeds without an associated increase in TDP. Several other features have been added as well but we’ll get into those later.


Sitting at the top of NVIDIA’s lineup is the GTX 880M, an updated version of the Kepler-based GTX 780M. It features 1536 CUDA cores, a 256-bit memory bus and up to 4GB of GDDR5 operating at 5Gbps which, on paper at least, nearly equals the desktop GTX 770’s specifications. There are several similarities between the previous generation flagship and this one but we can’t consider this a straight up rebrand since the GTX 880M’s performance has received a substantial boost through higher Base and Boost frequencies.

Another GK104 carryover is the GTX 870M which shares its core layout with the GTX 775M, but also uses much faster memory and greatly increased engine clocks. From a price / performance standpoint, we’ll likely see this become one of the more prevalent GPUs in this new lineup.

The GTX 860M is where things start to get interesting since it’s where Maxwell begins to see the light of day. Technically, there will be two SKUs of this GPU: a Kepler-based GK104 part for standard notebooks and the efficient Maxwell for thin and light gaming applications. Even though there’s a massive specification discrepancy between these two cores, both deliver approximately the same performance due to Maxwell’s architectural enhancements and its much higher core speeds. We’d also imagine the Maxwell version will cost partners slightly more due to its compatibility with smaller chassis.

Rounding things out is the GTX 850M, another mobile part which uses the GM107 core. Its specifications are similar to the GTX 860M but lower core clocks should lead to a performance difference of about 20% and substantially lower power consumption.


NVIDIA is claiming some impressive across-the-board performance uplifts for every one of these new parts. Take the GTX 880M and GTX 870M as a prime example of how an existing architecture (in this case Kepler) can be optimized over the course of its life to deliver much better gaming potential as new improvements are rolled into it. Through the judicious application of higher clock speeds, these two cards are respectively 15% and 30% faster than their predecessors, the GTX 780M and GTX 770M.

Naturally, Maxwell’s rollout into the GTX 860M and GTX 850M is the real story here since the new architecture delivers vastly better framerates. We’re talking about (supposedly) up to 60% for the lower end GTX 850M which is a huge step forward for more affordable notebooks.


At this point you may be wondering where AMD is in all of this but we don’t have a conclusive answer. Their M290X is based on two-year-old Pitcairn technology (which was also used in the HD 7970M and 8970M GPUs) while the Hawaii architecture hasn’t yet scaled down into the mobile market. Judging from their desktop renaming scheme for everything below the R9 290 series, it may be a while before we see a substantial notebook update from them. Plus, the latest generation of Kaveri APUs tends to hold its own quite well against mainstream discrete solutions so there’s no real rush for them to update an already-strong lineup.

One of NVIDIA’s main talking points for this round of graphics cards is how quickly technology has progressed in the last few years. We are now seeing the GTX 850M providing quite a bit more 1080P gaming potential than the GTX 580M, the top-dollar mobile GPU from three years ago. More importantly, the newest Maxwell architecture can achieve its numbers while fitting into the slim and light systems that on-the-go gamers are looking for.


With the GTX 800M series being NVIDIA’s premier mobile platform, it gets the full suite of technologies. Some old friends like GPU Boost, PhysX, CUDA and GeForce Experience have been given another chance to shine, with compatibility running throughout the lineup. These are augmented by some new features like Battery Boost, GameStream and ShadowPlay. The only exceptions are the Maxwell-based GTX 860M and GTX 850M which don’t receive SLI support since they’re meant to satisfy requirements in the thin and light segment.

Whether or not Maxwell will make a tangible difference in a market that’s been struggling is anyone’s guess but it does make noteworthy strides in areas that have been highlighted as deficient. Enthusiasts want battery longevity and smaller form factors for their portable gaming fix and, as evidenced by the success of high performance notebooks, they’re willing to pay a premium for that. However, the best aspects of these mobile Maxwell parts lie "beyond the silicon," so to speak, since they come with some enviable features. We’ll get into those on the following pages, which will give you a concise overview in preparation for our full review coming up in the next few weeks. But first, there are three more launches today that we need to discuss: the GT 840M, GT 830M and GT 820M.
 
The GT 800M Series: Challenging Haswell & APUs

With AMD’s mobile Radeon lineup being slowly relegated to outlier products, NVIDIA is turning their attention to competition against Intel’s Haswell processors and, to a lesser extent, Kaveri. This targeting of integrated graphics isn’t well trodden territory for a discrete GPU shop but times have changed. Many of Intel’s latest generation chips are equipped with pretty decent onboard 3D processing capabilities considering where they were not too long ago. Kaveri is another matter altogether and isn't something that NVIDIA touched upon in its briefings, but it's certainly a looming threat.

NVIDIA knows that volume sales lie with entry level and mainstream products, segments bereft of enthusiasts who want the best possible performance regardless of price. The challenge is to create a lineup of graphics processors which can convince the vast majority of the buying public that they’re better off with a discrete GPU.

Considering products like Iris Pro can pack a pretty good punch, NVIDIA will have their work cut out for them. Hence the introduction of two new GPUs and one rebrand: the GT 840M, GT 830M and GT 820M.


At this point in time, NVIDIA hasn’t revealed much about these lower-end cards so their specifications are still mostly unknown. What we do know is that the GT 840M and GT 830M both use a new Maxwell core code named GM108. The GT 820M meanwhile is a carry-over from the Fermi days. All of these GPUs feature 64-bit memory interfaces and up to 2GB of DDR3. Those aren’t mouth-watering specs but with Haswell gaming performance firmly in sight, more muscle would have simply increased costs without much benefit.

Since the GT series is aimed mostly at casual gamers, many of the GTX series’ more advanced features like Battery Boost, GameStream and ShadowPlay have been consigned to the dustbin. We doubt many in these GPUs’ intended audiences will care about the omissions but key items like Optimus, GPU Boost 2.0 and GeForce Experience compatibility have been left in place.


As you might expect, NVIDIA is able to capitalize on their architectural superiority against Haswell’s HD 4400 series and Iris IGPs in a big way. Not only do the GeForce cards throw more hardware at game rendering but they also benefit from a long history of driver development.

Simply put, drivers make or break a given graphics solution and Intel has historically struggled in this area. Optimizations for the newest titles, support for a game’s advanced features and quick bug fixes are all things that NVIDIA has been doing since their inception while Intel is now getting a crash course in the challenges associated with today’s latest titles. As a result, many of the most popular games are simply unplayable on Intel’s GPUs regardless of their claimed horsepower.

There is another reason for NVIDIA’s dominance here: GeForce Experience. On the desktop side GFE has tangible benefits through its ability to automatically optimize a game’s settings for your system’s configuration but on the notebook front, it is a game changer. Getting the most out of these lower end GT 800M-series parts can be a lesson in futility and many folks will likely give up on tweaking long before they reach playable framerates. Alternatively, turning off all the eye candy settings typically results in atrocious image quality. GeForce Experience is a plug and play solution that doesn’t require much, if any, user input to achieve the best possible combination of image quality and performance. It goes without saying that capabilities like this will be appreciated by anyone who has struggled in the past with wringing playable framerates out of an entry level GPU.


The GT 840M and GT 830M represent a notable improvement over the comparable GT 740M and GT 730M GPUs. As with the higher end GTX 800M cards, these two mainstream products hit those performance levels while consuming the same or less power than their predecessors. That’s pretty impressive considering the GT 7xxM wasn’t exactly a slouch when it came to efficiency.

The most important aspect of NVIDIA’s latest generation is how they perform against Intel’s integrated solutions. In that respect, we’re looking at complete domination with the 830M and 840M offering up to FOUR TIMES faster performance in today’s latest games. With that being said, Crystal Well and its Iris Pro IGPs do point towards Intel taking gaming quite a bit more seriously.


While NVIDIA’s claimed framerate numbers are rather impressive, when they are paired up with the Maxwell architecture’s inherent efficiency, these new GPUs bring a whole new level to performance per watt metrics. For example, the GT 840M can pump out 30 frames per second in Skyrim while the system requires just 17W of power. Compare and contrast this to the unplayable numbers delivered by the Iris Pro while its system requires a whopping 43W to function.
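Taken at face value, those quoted figures make the efficiency gap easy to quantify. A quick back-of-envelope calculation using only the numbers above (the helper function is ours, not NVIDIA's, and these are marketing figures rather than independent benchmarks):

```python
# Rough performance-per-watt comparison using NVIDIA's quoted Skyrim numbers.
# 17W (GT 840M system) and 43W (Iris Pro system) are claimed system power draws.
def fps_per_watt(fps, watts):
    """Frames per second delivered per watt of system power."""
    return fps / watts

gt840m = fps_per_watt(30, 17)
# Even granting Iris Pro the same 30 FPS it reportedly does not reach,
# it would still trail badly on efficiency:
iris_pro_ceiling = fps_per_watt(30, 43)
print(f"GT 840M: {gt840m:.2f} FPS/W vs Iris Pro ceiling: {iris_pro_ceiling:.2f} FPS/W")
# → GT 840M: 1.76 FPS/W vs Iris Pro ceiling: 0.70 FPS/W
```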

Unfortunately, we can’t independently test any of these results but if they do come to pass, NVIDIA will likely make a good case for themselves in mainstream systems. The big question mark here is system cost. With notebook manufacturers looking to save money in every corner, if the GT 8xxM cards cost significantly more to implement than Iris or the higher end HD 4400 SKUs we may not see them achieve broad acceptance.
 
The Mobile Maxwell Architecture & GM107



Maxwell represents the next step in the evolution of NVIDIA’s Fermi architecture, a process which started with Kepler and now continues into this generation. This means many of the same “building blocks” are being used but they’ve gone through some major revisions in an effort to further reduce power consumption while also optimizing die area usage and boosting process efficiency. These aspects are particularly important for the mobile architectures found in the GTX 800M series parts.

At this point some of you may be thinking that by focusing solely on TDP numbers, NVIDIA is offering up performance as a sacrificial lamb. This couldn’t be further from the truth. By improving on-die efficiency and lowering power consumption and heat output, engineers are giving themselves more room to work with. Take a mid range part like the GK106 found in the GTX 760M as an example. NVIDIA was constrained by the specific TDP and cooling capabilities of their partners’ notebooks, causing the GPU to operate below its true capabilities. As Maxwell comes into the fold, that ceiling won’t change but the amount of performance which can be squeezed out of the chips within the same constraints should improve substantially. In addition, these aspects allow higher spec Maxwell chips to be utilized in thin and light applications, areas where significant performance sacrifices had to be made in the past.

Before we move on, some mention has to be made about the 28nm manufacturing process NVIDIA has used for the GM107 core because it’s an integral part of the larger Maxwell story. With the new core layout efficiency has taken a significant step forward, allowing NVIDIA to provide GK106-matching performance out of a smaller, less power hungry core.


Maxwell SMM (LEFT) / Kepler SMX (RIGHT)

Much like with Kepler and Fermi, the basic building block of all Maxwell cores is the Streaming Multiprocessor. The Maxwell SM (or SMM) still uses NVIDIA’s second generation PolyMorph Engine which includes various fixed function stages like a dedicated tessellator and parts of the vertex fetch pipeline alongside a shared instruction cache and 64K of shared memory. This is where the similarities end since the main processing stages of Maxwell’s SMs have undergone some drastic changes.

While Kepler’s SMXs each housed a single core logic block consisting of a quartet of Warp Schedulers, eight Dispatch Units, a large 65,536 x 32-bit Register File, 16 Texture Units and 192 CUDA cores, Maxwell’s design breaks these up into smaller chunks for easier management and more streamlined data flows. While the number of schedulers, Dispatch Units and Register File size remain the same, they’re separated into four distinct processing blocks, each containing 32 CUDA cores and a purpose-built Instruction Buffer for better routing. In addition, load / store units are now joined to just four cores rather than Kepler’s six, allowing each SMM to process 32 threads per clock despite its lower number of cores. This layout ensures the CUDA cores aren’t all fighting for the same resources, thus reducing computational latency.

There are still a number of shared resources here as well. For example, each pair of processing blocks has access to 12KB of L1 / Texture cache (for a total of 24KB per SMM) servicing 64 cores while there’s still a globally shared cache structure of 64KB. As with Kepler, this block is completely programmable and can be configured in one of three ways. It can either be laid out as 48 KB of shared memory with 16 KB of L1 cache, as 16 KB of Shared memory with 48 KB of L1 cache or in a 32/32 mode which balances out the configuration for situations where the core may be processing graphics in parallel with compute tasks. This L1 cache is supposed to help with access to the on-die L2 cache as well as streamlining functions like stack operations and global loads / stores.
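The three ways that 64KB block can be split are summarized below; this is just a sanity check of the configurations described above, not driver code:

```python
# The three legal splits of Maxwell's 64KB configurable block, as described above.
CONFIGS = {
    "shared-heavy": (48, 16),   # 48KB shared memory / 16KB L1 cache
    "l1-heavy":     (16, 48),   # 16KB shared memory / 48KB L1 cache
    "balanced":     (32, 32),   # for graphics running in parallel with compute
}

for name, (shared_kb, l1_kb) in CONFIGS.items():
    # Every configuration must account for the full 64KB block.
    assert shared_kb + l1_kb == 64, f"{name} must use the full 64KB block"
```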

Maxwell’s Streaming Multiprocessor’s design was created to diminish the SM’s physical size, allowing more units to be used on-die in a more power-conscious manner. This has been achieved by lowering the number of CUDA cores from Kepler’s 192 to 128 while Texture Unit allotment goes from 16 to just 8. However, due to the inherent processing efficiency within Maxwell, these shortcomings don’t amount to much since each individual CUDA core is now able to offer up to 35% more performance while the Texture Units have also received noteworthy improvements. In an apples to apples comparison, an SMM can deliver 90% of an SMX’s performance while taking up much less space.
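That 90% figure follows directly from the numbers above. A quick sketch of the arithmetic, taking NVIDIA's "up to 35% more per core" claim at face value:

```python
# Sanity check of the "an SMM delivers 90% of an SMX" claim: an SMM has
# 128 CUDA cores versus an SMX's 192, but each Maxwell core is claimed
# to deliver roughly 35% more performance.
smx_cores, smm_cores = 192, 128
per_core_gain = 1.35  # NVIDIA's "up to 35%" figure, taken at face value

relative_throughput = (smm_cores * per_core_gain) / smx_cores
print(f"SMM vs SMX: {relative_throughput:.0%}")  # → SMM vs SMX: 90%
```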


With the changes outlined above, we can now get a better understanding of how NVIDIA utilized the SMM to create their new GM107 core. In this iteration, the GTX 860M and 850M receive the full allotment with five Streaming Multiprocessors for a total of 640 CUDA cores while the lower end parts have slightly cut-down versions. Compare and contrast this to a GK107 and you’ll immediately see the differences; where the GK107 made do with just two SMX blocks, the five included here bring about huge benefits with more PolyMorph Engines and better information routing, all while requiring significantly less power. The per-SM texture unit discrepancy between Maxwell and Kepler has been addressed by simply adding more “building blocks” to the equation.
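The building-block math works out as follows, using the per-SM figures from the architecture description above:

```python
# How SM counts translate into GM107's headline specs, versus the
# Kepler-based GK107 it replaces (per-SM figures from the text above).
SMM = {"cores": 128, "texture_units": 8}    # Maxwell building block
SMX = {"cores": 192, "texture_units": 16}   # Kepler building block

gm107 = {k: 5 * v for k, v in SMM.items()}  # full GM107: five SMMs
gk107 = {k: 2 * v for k, v in SMX.items()}  # GK107: two SMXs

print(gm107)  # → {'cores': 640, 'texture_units': 40}
print(gk107)  # → {'cores': 384, 'texture_units': 32}
```

Despite each SMM carrying half the texture units of an SMX, five of them leave GM107 ahead of GK107 on both counts.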

Another interesting thing to note is die size in comparison to the previous generation. Even though a GM107 core has 1.86 billion transistors and takes up 148mm² of space, it actually consumes less power and has a lower TDP than a significantly smaller 1.3 billion transistor, 118mm² GK107 design.

The back-end functions have largely remained the same with a fully enabled GM107 core featuring 16 ROPs spread over two blocks of eight alongside a pair of 64-bit memory controllers. However, as with many other elements of Maxwell, there’s more here than what first meets the eye. NVIDIA has equipped this core with a massive 2048KB L2 cache, representing a nearly tenfold increase over GK107’s layout. This substantial increase in on-chip cache means there will be fewer requests to the DRAM, reducing power consumption and eliminating certain bandwidth bottlenecks when paired up with the other memory enhancements built into Maxwell.

The comparison numbers between GK107 and the new GM107 are quite telling: Maxwell offers 25% more peak texture performance and about 2.3 times the delivered shader performance of the previous generation. This allows NVIDIA’s entry level core to compete with GK106-class parts, which is why the GTX 860M comes in two flavors for different market segments yet performance between the Maxwell and Kepler cores is approximately the same.


Enhanced Video & Improved Power States


Over the last year, we’ve seen NVIDIA utilize Kepler’s onboard NVENC H.264 video encoder in a number of unique ways. At first, it facilitated game streaming from a GeForce-equipped system to the SHIELD handheld gaming device, bringing PC gaming to your HDTV or into your hands. Since all of the encoding was done within the GPU’s hardware, the amount of processing overhead for SHIELD was reduced, thus optimizing battery life and performance. We also experienced ShadowPlay, an innovative use of NVENC to record gameplay without the need for process-hogging applications like FRAPS or PlayClaw.

With Maxwell, these features make a comeback but with an enhanced NVENC block that’s equipped with a dedicated decoder cache. As a result of these changes, Maxwell-based GPUs can double Kepler’s encode performance while also offering higher decode throughput.
 

Introducing Battery Boost & Optimus’ Return



The GT 800M and GTX 800M series have a boatload of unique features and some impressive specifications but in our eyes, Battery Boost has the most potential to change the way we approach notebook gaming. Historically, gamers had to choose between sacrificing battery life for enhanced performance or extending their time away from an outlet by artificially limiting framerates. There were options between those two extremes, like the addition of massive battery packs, but they always seemed like ham-fisted attempts to solve an unsolvable problem. This is where Battery Boost gets factored into the equation.


One of the inherent issues with optimizing performance on notebooks is the limitations imposed by games themselves. A game engine will always call upon the graphics core to feed it with the highest number of frames possible regardless of the situation, even if they really aren’t needed to maintain fluid gameplay. Not only does this put undue stress on system components but it also leads to lower battery life.

Through technologies like V-Sync this situation can be tamed but not eliminated altogether. There are other options too as demonstrated by AMD and NVIDIA instituting framerate caps, essentially putting a limiter on the GPU’s top end performance so it doesn’t consume excess power. In many ways Battery Boost builds upon those ideas and takes them to the next level with user adjustable settings and full-system tuning.
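The framerate-cap concept those technologies share can be sketched as a simple pacing loop. This is a generic illustration, not NVIDIA's driver-level implementation, and the function names are ours:

```python
import time

def run_capped(render_frame, target_fps=30, frames=120):
    """Render frames but sleep away any time left in each frame's budget,
    so the GPU isn't burning power producing frames above the target rate."""
    budget = 1.0 / target_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()  # the game's actual rendering work
        elapsed = time.perf_counter() - start
        if elapsed < budget:
            time.sleep(budget - elapsed)  # idle instead of racing ahead
```

Battery Boost goes well beyond this by also adjusting voltage and power states across the whole system, but the principle of not rendering frames nobody will notice is the same.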


Balancing playable and unplayable conditions in games was never at the forefront of designers’ minds when creating battery saving features for GPUs. They simply wanted to hit acceptable power consumption numbers when the system was running unplugged. Unfortunately, this approach still causes framerates to fluctuate quite a bit as the system cycles through various power states in an effort to deliver adequate performance. What was missing was a technology that targeted playable conditions without wasting performance chasing extreme framerates that likely won’t be noticeable anyway.

Battery Boost is largely designed around a low level software algorithm which optimizes not just the graphics processor but the whole system’s hardware components. NVIDIA isn’t saying exactly how it accomplishes this (there’s a lot of secret sauce behind the scenes) but there are obviously different performance and voltage states being applied in a manner that’s completely transparent to the end user.

The entire goal here is to fine tune system output so framerates hit a preset, user-adjustable target. If Battery Boost detects additional horsepower is needed, it will free up resources but, in that same vein, components can be put into lower power states if their full attention isn’t required. This is exciting stuff which can have a drastic impact upon battery life while also giving gamers the control they crave without sacrificing image quality.


NVIDIA claims that with Battery Boost enabled (it can also be disabled at your discretion) they are able to better harness the Maxwell architecture’s native frugality to deliver great battery life. In some cases there could be improvements of 20% or more over simple framerate targeting. Considering this is achieved while the system is still (hopefully) hitting 30FPS, we really hope this technology takes off and is included with all GTX 800M systems.

Without testing it, the only shortcoming we can see is the fact that Battery Boost needs to be enabled before a system transfers to battery power. If that doesn’t happen, standard power savings profiles will be instituted so planning ahead is essential.


NVIDIA’s GeForce Experience plays a pivotal part in this equation too; it acts as a traffic officer, presiding over Battery Boost and all its inherent features. With it, different parameters for Battery Boost’s various settings can be controlled. Even in-game image quality modifiers are included since GFE uses its built-in profiles to determine what’s necessary for the system to hit its target framerate.

If anything, it’s interesting to see how NVIDIA’s once-disparate technologies like GeForce Experience, Optimus, modifiable framerate targets and GeForce Boost are all now coming together to create a streamlined, feature-laden environment. It’s small touches like these that may have a significant impact upon the success or failure of their future GPUs.


Speaking of modifying settings, we can do just that through the Preference tab in GeForce Experience. This is completely optional since Battery Boost’s main claim to fame is its ability to deliver excellent gameplay and long battery life without a gamer’s direct input. However, if you want the system to strive for higher framerates, simply move the Battery Boost slider to the preferred location, up to 50FPS. It’s that simple.


With all of these gaming-focused battery optimizations, NVIDIA hasn’t left out their most important feature: Optimus. In fact, Optimus can run right alongside Battery Boost so your system can deliver the desired framerates when it’s being used for gaming but enter a much lower power state when other tasks are being completed.

<object width="640" height="360"><param name="movie" value="//www.youtube.com/v/frC2SLldzZc?hl=en_GB&version=3&rel=0"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="//www.youtube.com/v/frC2SLldzZc?hl=en_GB&version=3&rel=0" type="application/x-shockwave-flash" width="640" height="360" allowscriptaccess="always" allowfullscreen="true"></embed></object>​

For those of you who don’t know what Optimus is, as explained in the video above, it is a technology which allows seamless switching between the discrete GPU and Intel’s IGP. When the notebook doesn’t need all the power brought to the table by a GeForce card, it hands things off to the integrated solution, which provides better battery life. With Optimus and Battery Boost working in tandem, it will certainly be interesting to see what NVIDIA can accomplish in upcoming notebooks.
 

ShadowPlay & GameStream Come to Notebooks



ShadowPlay and GameStream haven’t been talked about much in our previous desktop GPU reviews but they have gradually grown into impressive features. Neither is key to the success or failure of NVIDIA’s GeForce lineup, but they represent some serious value-added muscle for mobile graphics cards and give the GTX 800M lineup additional street cred among PC gamers. Let’s kick this section off with a quick rundown of ShadowPlay.


With recorded and live gaming sessions becoming hugely popular on video streaming services, ShadowPlay aims to offer a way to seamlessly log your onscreen activities without the problems of current solutions. Applications like FRAPS, which have long been used for in-game recording, are inherently inefficient since they tend to require a huge amount of resources, bogging down performance during the situations when you need it most. In addition, their file formats aren’t all that space conscious, with 1080P videos of over 10 minutes routinely eating up over a gigabyte of storage space.
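That storage complaint is easy to verify with some quick arithmetic: a gigabyte over ten minutes implies a hefty sustained bitrate.

```python
# Back-of-envelope check on "over a gigabyte for 10 minutes of 1080P":
# that works out to a sustained bitrate north of 14 Mbit/s.
size_bits = 1 * 1024**3 * 8          # 1 GiB expressed in bits
duration_s = 10 * 60                 # ten minutes in seconds
bitrate_mbps = size_bits / duration_s / 1e6
print(f"{bitrate_mbps:.1f} Mbit/s")  # → 14.3 Mbit/s
```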

By leveraging the compute capabilities of NVIDIA’s GTX 880M, ShadowPlay can automatically buffer up to 20 minutes of previous in-game footage. In many ways it acts like a PVR by recording in the background using a minimum of resources, ensuring a gamer will never notice a significant performance hit when it is enabled. There is also a Manual function which can start and stop recording with the press of a hotkey. All videos are encoded in real time using H.264 / MPEG4 compression, making for relatively compact files.
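The PVR-style behaviour described above boils down to a fixed-size rolling buffer: new frames push the oldest ones out. A minimal sketch with hypothetical names (NVIDIA hasn't published ShadowPlay's internals, so this only illustrates the concept):

```python
from collections import deque

class ShadowBuffer:
    """PVR-style rolling buffer: keep only the most recent N seconds of
    encoded frames, discarding the oldest as new ones arrive."""
    def __init__(self, seconds=20 * 60, fps=30):
        # A bounded deque drops its oldest entry automatically when full.
        self.frames = deque(maxlen=seconds * fps)

    def capture(self, encoded_frame):
        self.frames.append(encoded_frame)

    def save_clip(self):
        # "Shadow mode": dump whatever footage is currently buffered.
        return list(self.frames)

buf = ShadowBuffer(seconds=2, fps=3)  # tiny buffer just for illustration
for i in range(10):
    buf.capture(i)
print(buf.save_clip())  # → [4, 5, 6, 7, 8, 9]
```

Because the buffer holds already-encoded frames and old data is simply overwritten, memory use stays constant no matter how long the session runs, which is why background recording can stay cheap.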

<object width="640" height="360"><param name="movie" value="//www.youtube.com/v/MAdv-CsOWvg?hl=en_GB&version=3&rel=0"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="//www.youtube.com/v/MAdv-CsOWvg?hl=en_GB&version=3&rel=0" type="application/x-shockwave-flash" width="640" height="360" allowscriptaccess="always" allowfullscreen="true"></embed></object>​

Since ShadowPlay’s recording and encoding is processed on the fly, it can be done asynchronously to the onscreen framerate so there won’t be any FRAPS-like situations where you’ll need to game at 30 or 60 FPS when recording. You can watch our full overview of ShadowPlay’s beta (shot a few months ago) in the video above.

NVIDIA has also included a Twitch.tv streaming feature which automatically live streams your gameplay to Twitch, one of today’s most popular game broadcasting services. This, along with its standard in-game recording, has allowed ShadowPlay to quickly become a key tool for gamers with supporting GPUs. Now that NVIDIA is bringing it to the mobile market, a whole new world is about to open up.

<object width="640" height="360"><param name="movie" value="//www.youtube.com/v/nLKj2j69K1c?hl=en_GB&version=3&rel=0"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="//www.youtube.com/v/nLKj2j69K1c?hl=en_GB&version=3&rel=0" type="application/x-shockwave-flash" width="640" height="360" allowscriptaccess="always" allowfullscreen="true"></embed></object>​

The GameStream ecosystem is, at face value, relatively straightforward. At the most basic level it consists of a GeForce-powered PC or in this case notebook and a secondary wireless device like NVIDIA’s excellent SHIELD or a Tegra-powered tablet. These once-disparate products are melded together over a high speed wireless network so the PC can do the heavy processing and effectively stream a game or application to a handheld device. This allows gamers to stream many PC titles (there are over 50 supported right now) right into the palm of their hand.

Now, notebooks packing NVIDIA’s new GPUs can be integrated with other GameStream components to create a truly mobile gaming setup. Once again, SHIELD’s console mode sits at its very heart.
 