
The GPU Technology Conference: NVIDIA's New Focus in a Changing Market


SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,841
Location
Montreal
GTC-10.jpg


Last year, NVIDIA held NVISION for a large cross-section of the industry. Despite the obvious success of that show, this year the company tried something different by hosting its first annual GPU Technology Conference (GTC).

Even though you and I are most familiar with the GPU being used predominantly for graphics processing, NVIDIA has been marketing this once graphics-focused architecture for massively parallel computing. Discrete low-power GPUs have also been making their way into other markets such as cell phones, media players, netbooks and even GPS devices. Every aspect of programming for and harnessing the potential of these emerging markets is a challenge considering that, just a few years ago, many of them didn’t exist. The lofty goal of this conference was to bring the development community together to discuss the present-day advances and future possibilities that can be achieved when programming for the GPU.

The conference itself was broken into a number of separate summits, each focused on a different aspect of the industry. The Emerging Companies Summit showcased some 60 new companies that have been harnessing the power of the GPU to deliver innovative solutions in a number of areas. Meanwhile, the GPU Developer’s Summit concentrated on presentations discussing techniques and solutions for harnessing the full potential of the GPU in parallel computing situations. Last but not least was the NVIDIA Research Summit, which served as a primer for industry professionals who wanted to learn how they could use GPUs to drastically increase their productivity while possibly decreasing costs.

Over the course of three days, companies from around the world came together to participate in over 250 sessions, keynotes and tutorials. Hardware Canucks was on hand to cover it all but considering there were over 500 hours of information being presented, it was impossible to catch everything. As such, we will endeavor to give you as complete a cross-section as possible of what went on and take a quick look at a number of new technologies and upcoming programs which were presented.

To many of you reading this article, the focus of this conference may sound a bit dry and may lack that whisper of excitement you have come to expect from anything GPU-related, but let me tell you: interesting things are happening in the GPU computing industry. As you progress through the article, I hope that you too will begin to share some of my newfound excitement for the possibilities that GPUs bring to the table.

GTC-13.jpg
 
The CUDA Architectural Ecosystem


Before we get started with this article, I think it is necessary to give everyone reading a quick crash course on NVIDIA’s CUDA and why many consider it an essential part of their programming toolbox. CUDA also happened to be the focus of the GTC.

GTC-1.jpg

In order to make the GPU into a processing powerhouse, NVIDIA introduced CUDA, or Compute Unified Device Architecture. It was first released to the public less than three years ago and allows developers to harness the number-crunching power of NVIDIA GPUs through the C programming language (with NVIDIA extensions). In the grand scheme of implementing a new technology, CUDA’s current upswing in popularity is nothing short of incredible given how recently it became available.
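
For those curious what those C extensions amount to, the core idea is that a "kernel" function runs once per thread and each thread computes its own global index into the data. The plain-Python sketch below emulates only that indexing model (in real CUDA C the kernel would be marked __global__ and launched with the <<<blocks, threads>>> syntax); all names here are ours, purely for illustration.

```python
# Plain-Python emulation of CUDA's execution model (illustrative only).
# Real CUDA C would launch this as saxpy<<<blocks, threads_per_block>>>(...);
# here we simply loop over every (block, thread) pair in the grid.

def saxpy_kernel(block_idx, thread_idx, block_dim, n, a, x, y, out):
    """One logical GPU thread: computes out[i] = a * x[i] + y[i]."""
    i = block_idx * block_dim + thread_idx  # global thread index
    if i < n:                               # guard: the grid may overshoot n
        out[i] = a * x[i] + y[i]

def launch(kernel, blocks, threads_per_block, *args):
    """Emulate a grid launch by visiting every (block, thread) pair."""
    for b in range(blocks):
        for t in range(threads_per_block):
            kernel(b, t, threads_per_block, *args)

n = 10
x = list(range(n))
y = [1.0] * n
out = [0.0] * n
blocks = (n + 3) // 4        # enough 4-thread blocks to cover all n elements
launch(saxpy_kernel, blocks, 4, n, 2.0, x, y, out)
print(out)                   # each element is 2 * x[i] + 1
```

On a real GPU the inner iterations run simultaneously across hundreds of cores, which is the entire point of the architecture.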

However, NVIDIA will be the first to remind anyone that their ultimate goal is not to replace CPU processing with CUDA-enabled GPUs. Rather, NVIDIA’s goal is to give developers the ability to run parallel workloads containing large data sets on the GPU while leaving the CPU to crunch through slightly more mundane instruction sets.

Here is how NVIDIA describes it:

NVIDIA® CUDA™ is a general purpose parallel computing architecture that leverages the parallel compute engine in NVIDIA graphics processing units (GPUs) to solve many complex computational problems in a fraction of the time required on a CPU. It includes the CUDA Instruction Set Architecture (ISA) and the parallel compute engine in the GPU.

GTC-9.jpg

One of the main focuses of the GTC as well as the keynotes held therein was the use of the GPU not as a CPU replacement but as a tool that should be working in parallel with the CPU. Above we can see a chart that illustrates exactly why NVIDIA thinks the co-processing ecosystem with a combination of CPUs and GPUs will benefit the industry.

When it comes to serial code, a run-of-the-mill CPU is very efficient but falls flat on its face when asked to run parallel instructions. Meanwhile, a GPU eats through parallel code like no one’s business but can’t efficiently run serial code. It all comes down to using the right tool for the job, and with a combination of CPUs and GPUs, a company is well equipped to handle any processing-intensive tasks it might have.
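
The "right tool for the job" argument can be put into numbers with Amdahl's law: the serial portion of a workload caps the overall speedup no matter how fast the GPU chews through the parallel portion. A quick sketch with hypothetical figures:

```python
def amdahl_speedup(parallel_fraction, parallel_speedup):
    """Overall speedup when only the parallel fraction is accelerated."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / parallel_speedup)

# Hypothetical workload: 90% of the runtime is data-parallel (GPU-friendly)
# and 10% is serial (CPU territory). Even a 100x-faster GPU on the parallel
# part yields roughly a 9x overall gain; the serial part caps the benefit.
print(amdahl_speedup(0.9, 100))  # ~9.17
print(amdahl_speedup(0.5, 100))  # ~1.98 when only half the work parallelizes
```

This is exactly why NVIDIA pitches the GPU as a co-processor rather than a CPU replacement.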

GTC-5.jpg

Perhaps unknown to many people is the fact that NVIDIA considers CUDA an ecosystem that encompasses both open (OpenCL, OpenMP, etc.) and closed languages, libraries and compilers. Since it supports many of the industry’s most popular coding languages, support has been growing quickly, with dozens of universities now teaching CUDA and even the industry’s large OEMs beginning to jump on board as well.
 

Debuggers for All


GTC-2.jpg

Imagine this: NVIDIA’s CEO Jen-Hsun Huang is making his keynote address at the GPU Technology Conference in front of a packed room containing some of the world’s greatest minds and everyone is silent. Then out of the blue, the crowd of intellectuals erupts in an impromptu cheer of both relief and happiness when mention is made of a debugger called Nexus that will be available for NVIDIA GPUs. You would have thought Jen-Hsun had just offered free Ferraris to everyone in attendance.

I will admit that even I was shocked by the enthusiasm shown for Nexus but the more I learned about this program, the more I was able to understand where the crowd’s applause came from. You see, to a programmer, having a debugger is like holding the Holy Grail.

GTC-3.jpg

At its most basic level, a debugger allows a programmer to step through a program to pinpoint the source of any issues within its code or identify which piece of hardware caused a crash. Nexus runs inside the industry-standard Microsoft Visual Studio, includes debugging profiles for GPU memory, and allows for checking the kernels, shaders and other aspects necessary for GPU computing. The GUI also helps developers better visualize and track the thousands of concurrent threads which are present when using a GPU for computational work.

Below we have a quick video showing Nexus in action.

<object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/FLQuqXhlx40&hl=en&fs=1&rel=0&color1=0x3a3a3a&color2=0x999999"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/FLQuqXhlx40&hl=en&fs=1&rel=0&color1=0x3a3a3a&color2=0x999999" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object>​

So what does this mean for developers? First of all, building on MS Visual Studio makes Nexus immediately familiar to most programmers and developers without additional resources being devoted to training. It also means that there will be less trial and error when it comes to finding issues with current and upcoming programs that use CUDA. This should draw more developers towards NVIDIA’s architecture, since choosing it now means having access to an efficient way to debug code.

Unfortunately, at this time Nexus isn’t available for DirectCompute 11, OpenCL or D3D11 but we have been assured that NVIDIA is hard at work making sure the next version of Nexus will incorporate these additions. The tentative release date for DirectCompute and DX11 support is early Q1 2010 and OpenCL-C debugging will come later in the year.
 
NVIDIA’s New Push: DX11 & DirectCompute


Despite what many may have heard courtesy of some half-baked PR agency’s knee-jerk email, NVIDIA was showing its support for the upcoming DX11 API in a big way at the GTC. Even though they don’t have an architecture that supports DirectX 11 at this point, there were quite a few sessions dedicated to this new API along with DirectCompute. Many of the sessions were primers for developers looking into using these new tools to advance their game development process.

GTC-11.jpg

One of the main focuses of Microsoft’s DirectX 11 is to streamline the development process while enhancing overall efficiency. In layman’s terms this means easier, faster game development using advanced graphical features that won’t take up additional resources on the GPU. Hopefully, new mentalities such as this will rub off and we will see DX11 take off quicker than DX10 / 10.1 ever did, as those past APIs never really got off the ground.

Another benefit of writing for the DX11 API is the fact that unlike DX10, there is no reason to write a whole new code path just for DX9 compliance. According to many developers we have talked to, it was the necessity of writing separate paths for older DX9 hardware that stymied their enthusiasm for DX10. All a developer has to do is specify the features they want to run given a set of hardware circumstances and they are off to the races with support for current and past APIs.

GTC-12.jpg

The efficiency of DX11 comes through a number of advances which allow programmers flexibility without the need to dedicate more computational resources for models or scenes with higher detail levels. In just one example, NVIDIA demonstrated how a simple move away from a regular triangle mesh towards a more compact DX11 compression method can virtually eliminate memory bottlenecks. This will allow developers to create more detailed environments without having to worry about GPU texture memory restrictions.

With support for hardware tessellation, two completely new programmable shading stages (Hull Shader and Domain Shader) and a whole toolbox of other features that will help developers realize their goals, we are hoping DirectX 11 will catch on quickly. Contrary to some rumors, NVIDIA seems to be taking DX11 very seriously indeed.


GTC-26.jpg

The availability of DX11 will also usher in the age of DirectCompute 11, which will allow even more calculations to be shunted towards the GPU. Through the use of programmable compute shaders, DirectCompute 11 is able to accelerate everything from enemy AI to the movement of objects through a scene. Additional image post-processing techniques like blur, soft shadows and depth of field can be made more efficient through the use of Compute Shader Model 5.0 as well.
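
To see why post-processing effects map so naturally onto compute shaders, consider a simple box blur: every output pixel depends only on its own small neighbourhood, so each one could be handled by an independent GPU thread. The Python sketch below is our own serial illustration of that per-pixel independence, not DirectCompute code:

```python
def box_blur(img, w, h):
    """3x3 box blur over a w-by-h grid of brightness values. Every output
    pixel depends only on its own neighbourhood, so on a GPU each pixel
    could be computed by an independent thread."""
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count  # average of the valid neighbours
    return out

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
blurred = box_blur(img, 3, 3)
print(blurred)  # the bright centre pixel bleeds into all nine pixels
```

In a real compute shader the two outer loops disappear entirely; the runtime dispatches one thread per pixel.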

NVIDIA’s team is currently heading up additional initiatives with developers to make sure they are able to make the most out of these DX11 features. From what we can tell, the DX11 generation should bring about more lifelike visuals than ever before while making high detail levels much more accessible to people with lower-spec’d cards.
 
OpenCL and Physics; A Market Cornered


With Windows 7 just about to ship to consumers worldwide, many of us will soon be talking about the possibilities OpenCL (Open Computing Language) brings to the table. In the most simplified terms, OpenCL is an open standard which allows developers to harness the power of modern GPUs in order to accelerate certain tasks. It isn’t limited to Microsoft operating systems either, since Apple has made OpenCL an integral part of its new OS X Snow Leopard platform.

GTC-4.jpg

The true goal of OpenCL is to allow workloads to be spread across heterogeneous platforms consisting of GPUs and CPUs (along with other processors) and to prioritize the workflow towards the hardware best able to handle the processing. As such, serial and task-based workloads will be sent to the CPU while larger data-parallel workloads will be the graphics cards’ domain. The developers we spoke to are extremely excited about OpenCL, but many admitted that it may take them a while to come to grips with directing tasks to the appropriate hardware within a heterogeneous environment. These same developers also made it apparent that no matter what the challenges may be with OpenCL, they would rather support open standards than closed ones. As for NVIDIA, OpenCL is one of the programming languages supported within the CUDA architecture.
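
The dispatch idea itself is simple enough to sketch: classify each task by its shape and route it to the processor best suited for it. The thresholds and task names below are entirely hypothetical; a real OpenCL runtime makes these decisions through device queries and command queues rather than a hand-rolled classifier.

```python
# Hypothetical scheduler sketch: serial/task work goes to the CPU queue,
# large data-parallel work to the GPU queue. All names and numbers are ours.

def classify(task):
    """Large data-parallel batches suit the GPU; everything else the CPU."""
    if task["parallel"] and task["elements"] >= 10_000:
        return "gpu"
    return "cpu"

tasks = [
    {"name": "parse_config", "parallel": False, "elements": 1},
    {"name": "matrix_mul",   "parallel": True,  "elements": 1_000_000},
    {"name": "tiny_filter",  "parallel": True,  "elements": 64},
]

queues = {"cpu": [], "gpu": []}
for task in tasks:
    queues[classify(task)].append(task["name"])

print(queues)  # {'cpu': ['parse_config', 'tiny_filter'], 'gpu': ['matrix_mul']}
```

Note that the tiny parallel job still lands on the CPU: below a certain size, the overhead of shipping data to the GPU outweighs the gain.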

<object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/DcTRjsliNAo&hl=en&fs=1&rel=0&color1=0x3a3a3a&color2=0x999999"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/DcTRjsliNAo&hl=en&fs=1&rel=0&color1=0x3a3a3a&color2=0x999999" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object>​

OpenCL may at first sound like it will have very little to do with the way games are presented but that couldn’t be further from the truth. It will play a huge part in allowing GPUs to process many of the most demanding physics calculations while allowing the CPU to continue on with other tasks. NVIDIA is actually leading the field here with publicly available OpenCL drivers, support through the CUDA architecture as well as SDK implementation.

When it comes to public opinion, NVIDIA’s take on physics processing has taken some hits as of late, with its PhysX API being relegated to the back rooms of the development community. Even though the list of PC games with support is minimal, there have been some notable exceptions, with the release of Batman: Arkham Asylum foremost among them.

However, the real story here is not about a proprietary API like PhysX but rather what OpenCL can bring to NVIDIA’s table when it comes to supporting physics calculations. NVIDIA is now in an enviable position since, with their firm support behind OpenCL, they are able to boast compatibility with every GPU-accelerated physics API on the market. Bullet, Havok Cloth, the DMM Engine and many others will soon see their debut across multiple platforms which include ATI and NVIDIA GPUs. However, with their additional support for PhysX, NVIDIA GPUs seem to have a leg up on the competition when consumers are looking for a product which supports as many applications as possible. Sure, OpenCL could further marginalize PhysX but if it does, NVIDIA is still in a prime position to support any and all OpenCL physics APIs.
 
The Next Gen Fermi Architecture


GTC-20.jpg

By now I am sure many of you reading this already know that the GPU Technology Conference was used as the venue to announce NVIDIA’s next generation architecture code-named Fermi. Since it was announced at a conference which was nearly entirely focused on GPU computing, there was very little information for all the gamers who are eagerly awaiting NVIDIA’s answer to ATI’s 5000-series. However, there were some tantalizing clues about what we can expect from the GeForce iteration of Fermi and in this section we will endeavor to put as much of the architectural technical jargon into words you can all understand.

GTC-19.jpg

Taking a look at the core layout of the flagship Fermi GPU, we can see that there are quite a few similarities when compared to the G80 and G200 series with the 512 Shader Processors (now called CUDA Processors) shown in green taking up the majority of die space. On the outer fringes of the die are a total of six 64-bit memory partitions which can be addressed separately for a combined 384-bit GDDR5 memory interface. The other items on the die’s periphery are the GigaThread unit (we will talk more about this later) as well as the Host Interface. The shared L2 Cache is also a feature unique to this GPU but like the GigaThread unit, we will be covering this a bit later.

As you may have expected, all of these components mashed together make the Fermi architecture one of the most complex to date and also one of the largest with 3 billion transistors.

GTC-25.jpg
GTC-23.jpg


The CUDA Processors are grouped into units of 32 in order to form the basis of a Streaming Multiprocessor, and there are 16 of these SMs per Fermi core. Meanwhile, each of the CUDA cores features a single ALU (arithmetic logic unit) and FPU (floating point unit), much like previous generations. However, where the Fermi architecture breaks from the mold is with its adoption of the new IEEE 754-2008 floating point standard, which adds instructions for both single and double precision arithmetic. Double precision arithmetic is mostly used in HPC (high performance computing) scenarios such as quantum chemistry and will have little effect on this card’s performance in most people’s home systems.
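
The headline figures quoted above are easy to sanity-check with a couple of multiplications:

```python
# Sanity-checking Fermi's headline numbers as reported in this article.
cuda_cores_per_sm = 32
streaming_multiprocessors = 16
print(cuda_cores_per_sm * streaming_multiprocessors)  # 512 CUDA processors

memory_partitions = 6
bits_per_partition = 64
print(memory_partitions * bits_per_partition)  # 384-bit combined interface
```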

GTC-30.jpg

In past NVIDIA GPUs, each of the SMs had access to a mere 16KB of shared memory but with Fermi, each SM has 64KB of on-chip memory. In addition, that 64KB can be split between shared memory and L1 cache in a number of ways depending on the needs of an application.

Speaking of cache, Fermi is the first NVIDIA GPU with access to both an L1 and a shared L2 cache memory hierarchy. This allows for improved bandwidth as well as fast data sharing between the SMs.

GTC-21.jpg

Finally, in our quick run-through of this new architecture we come to the GigaThread hardware thread scheduler, which allows for concurrent kernel execution. In a serial kernel execution scenario, each kernel has to wait for the one before it to finish before it can begin. However, with concurrent kernels the whole GPU can be utilized, since different kernels of the same application context can operate on Fermi at the same time. This not only frees up resources but allows for quicker processing of things like in-game physics or AI.
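
A toy model makes the benefit concrete: with serial execution the total time is the sum of all kernel runtimes, while with (idealized) concurrent execution it is bounded by the longest one. The kernel names and millisecond timings below are hypothetical:

```python
# Toy model of kernel scheduling; names and timings are invented.

def serial_makespan(kernels):
    """Kernels run back to back: total time is the sum of the runtimes."""
    return sum(ms for _, ms in kernels)

def concurrent_makespan(kernels):
    """Idealized concurrent execution: independent kernels overlap, so the
    longest one dominates (assumes the GPU can host them all at once)."""
    return max(ms for _, ms in kernels)

kernels = [("physics", 4), ("ai", 3), ("post_fx", 2)]
print(serial_makespan(kernels))      # 9 ms when run one after another
print(concurrent_makespan(kernels))  # 4 ms when overlapped
```

In practice the gain depends on how much of the GPU each kernel leaves idle, but the direction of the win is the same.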

While this section only scratched the surface of what Fermi has to offer, we intend to offer you a more in-depth look as the launch gets closer. Until then, we have this small comparison diagram for you:

GTC-24.jpg
 
A Focus On: TechniScan Medical Systems


GTC-14.jpg

Website: Techniscan | Medical Systems


One of the more interesting sessions I attended was held by a start-up company called TechniScan Medical Systems. If you were fortunate enough to watch the opening keynote delivered by NVIDIA’s CEO, then you will already be familiar with the technological advancements TechniScan is bringing to the medical imaging field. Their goals struck close to my own heart and I am sure they will do the same for you.

With one in every eight women developing breast cancer at some point in her life, the number of cases has been steadily rising, but the techniques used for early detection have been lagging behind. The usual methods of mammography are neither particularly accurate nor pleasant to undergo, and the subsequent biopsies usually return benign results. What the industry needs is an accurate, quick imaging system which can filter out benign cysts and false positives. This, in theory, should free up doctors’ time and equipment to concentrate on priority cases.

GTC-8.jpg

TechniScan has come up with a system that scans the whole breast for any signs of cancer in a non-invasive way. A person lies face-down on the examination table while her breast is suspended in warm water, and an ultrasonic scanner is then rotated around the target area. This technique produces thousands of cross-sections that need to be compiled into a 3D image, and this is of course where CUDA and GPU computing come into play.

GTC-7.jpg

One of the main topics of conversation at the GTC was the power of near-instant visualization that GPUs help provide in fields ranging from seismic research to medical imaging. In this case we are talking about a full 3D representation of the breast, which means a large amount of processing power is needed. Instead of using a huge rendering farm to process the multitude of cross-sectional images that the scanner provides, TechniScan can do the same amount of work in a fraction of the time with only a few Tesla engines. Their scanning techniques and the high-resolution rendering capability of modern GPUs allow TechniScan’s ultrasounds to provide pinpoint accuracy. Accuracy means quicker detection, which could in fact save quite a few lives.
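
The reconstruction step can be pictured as stacking many 2D cross-sections into one 3D volume that can then be rendered or searched. The toy sketch below shows only that stacking idea with synthetic data; the real pipeline involves heavy per-slice signal processing, which is precisely what the Tesla cards accelerate.

```python
def stack_slices(slices):
    """Stack equally sized 2D cross-sections into a 3D volume, indexed as
    volume[z][y][x]. Complains if the slices disagree on dimensions."""
    h, w = len(slices[0]), len(slices[0][0])
    for s in slices:
        assert len(s) == h and all(len(row) == w for row in s)
    return slices

# Five 3x4 synthetic slices; each voxel simply carries its slice index z.
slices = [[[z for _ in range(4)] for _ in range(3)] for z in range(5)]
volume = stack_slices(slices)
print(len(volume), len(volume[0]), len(volume[0][0]))  # 5 3 4  (z, y, x)
print(volume[2][1][3])  # a voxel from the slice at depth z = 2
```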

With FDA approval expected in the near future, TechniScan looks to be well on their way to providing a real leap forward for the medical industry when it comes to detection of one of the most prevalent forms of cancer. We are wishing them all the best.
 

A Focus On: Mental Images iRay


GTC-15.jpg


Website: Mental Images: Home


In the professional field of architectural rendering, a quality presentation that sells the project to a customer is everything. The problem is that photo-real rendering comes at a heavy price in terms of the resources and computational power needed. In some cases, a single ray traced frame can take hours or even days for a traditional rendering farm to complete.

Now, imagine you are the lead architect presenting daytime renderings of an office space on your laptop and out of the blue the client asks to see what their new offices would look like under different lighting conditions. Before iRay, it would have been all but impossible for additional renderings to be completed in a timely fashion.

GTC-16.jpg

What iRay does is leverage GPUs in the rendering process; with a predetermined set of instructions, it can produce ray traced renderings of existing scenes in record time. In order to do this, Mental Images used highly optimized BSDF and EDF shading frameworks to accurately represent global illumination effects across numerous surfaces, finishes and textures. This is all accelerated through the CUDA architecture, which is able to handle these parallel data sets with an efficiency that can’t be matched even by today’s most advanced CPUs.
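
As a tiny taste of what a BSDF framework computes, here is the simplest possible case: a Lambertian (perfectly diffuse) surface, whose BSDF is the constant albedo / pi. This is our own minimal illustration of the concept, not Mental Images’ code:

```python
import math

def lambert_bsdf(albedo):
    """Constant Lambertian BSDF value: albedo / pi (energy-conserving)."""
    return albedo / math.pi

def diffuse_radiance(albedo, light_intensity, cos_theta):
    """Reflected radiance from one light: f * L * max(cos(theta), 0)."""
    return lambert_bsdf(albedo) * light_intensity * max(cos_theta, 0.0)

# A mid-grey surface lit head-on versus lit at a steep grazing angle.
print(round(diffuse_radiance(0.5, math.pi, 1.0), 3))  # 0.5
print(round(diffuse_radiance(0.5, math.pi, 0.1), 3))  # 0.05
```

A production renderer evaluates expressions like this billions of times per frame, once per ray-surface interaction, which is why the problem parallelizes so well on a GPU.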

GTC-17.jpg

The result is a one-touch rendering engine that can simulate everything from daylight to spot and point lights, and their effect on a scene, in a matter of seconds rather than hours. Mental Images also makes it possible to stream rendering requests and results over the internet. This means someone travelling to a client can render a scene remotely and have the results delivered immediately to a laptop or even a PDA. All in all, this could usher in a new age for on-the-go professionals who need quick changes to their architectural or design presentations.
 
A Focus On: Cooliris and VisioGlobe

Cooliris


GTC-18.jpg


Website: Cooliris | Discover More


Cooliris has been around for more than three years now, but they are still considered an emerging company and as such were pushing their new web browsing plug-in at the GTC. The goal of this plug-in is to provide a visual search tool which replaces the usual list view of sites like Google, Amazon, Craigslist, Hulu and other search engines with fully interactive 3D slideshows. It installs right over the top of many existing browsers, which makes the transition seamless for most users.

<object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/QXL4-a6ySg4&hl=en&fs=1&rel=0&color1=0x3a3a3a&color2=0x999999"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/QXL4-a6ySg4&hl=en&fs=1&rel=0&color1=0x3a3a3a&color2=0x999999" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object>​

Until recently, the graphics processing power required to display this type of interface wasn’t available to most consumers, but things have changed rapidly. With even mobile phones packing some serious graphics horsepower, Cooliris has been able to launch its downloadable app across platforms such as the iPhone, and it will support upcoming Tegra-based devices as well.

I have personally been using Cooliris since being introduced to it about six months ago and let me tell you, it is almost impossible to go back to standard searches.


Visioglobe


GTC-33.jpg


Website: Visioglobe.com


In order to put what Visioglobe does into perspective, just imagine your run-of-the-mill GPS device on roofies and Red Bull and you’ll have an idea of where this company sees things heading. They figure we live in a three-dimensional world and it’s time on-screen navigation took the next logical step in visualization with true street-level views.

<object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/lAiOpqPv26c&hl=en&fs=1&rel=0&color1=0x3a3a3a&color2=0x999999"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/lAiOpqPv26c&hl=en&fs=1&rel=0&color1=0x3a3a3a&color2=0x999999" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object>​

Part of the demo we were shown is above and even though it was shot with a digital camera, it is easy to see why people seem drawn to Visioglobe’s approach. With the ability to link in to social networking sites and display people’s positions in real-time, it definitely has potential beyond mundane “point A to point B” directions.

GTC-32.jpg

NVIDIA’s Tegra chip was powering the palm-sized device. The graphics themselves aren’t going to wow you with their fidelity, but that isn’t the point of this early beta version of the program. Rather, what we were seeing was a demonstration of where GPS devices are headed in the near future and how the Tegra can be used to move certain fields towards higher levels of visualization.
 
A Resounding Success or an Uncertain Future?



Before I start putting my thoughts down on paper about NVIDIA’s GPU Technology Conference and the direction the industry seems to be heading, allow me to share a quote with you:

“…we expect things like CUDA and CTM will end up in the same interesting footnotes in the history of computing annals – they had great promise and there were a few applications that were able to take advantage of them, but generally an evolutionary compatible computing model, such as we’re proposing with Larrabee, we expect will be the right answer long term.”

Intel’s Pat Gelsinger to CustomPC
July 2nd, 2008


Ironically, Pat isn’t working for Intel anymore and Larrabee seems to be stuck at the glorified tech demo stage, but CUDA continues to march on and is gaining popularity at an almost alarming pace. We have seen applications which utilize the GPU for parallel computing tasks emerging from every corner of what used to be an industry reserved for exorbitantly priced and power-hungry CPU clusters. With more and more companies jumping on board, GPU computing in general seems to be exactly what many industries were looking for. Meanwhile, the statements about this burgeoning popularity and near-limitless potential weren’t made by NVIDIA but rather by representatives we spoke to from the likes of Industrial Light and Magic / Lucasfilm, Autodesk and other industry luminaries. That in itself says a lot about how CUDA has been received. If anything, the GPU Technology Conference proved that no matter what the naysayers have said, CUDA is real, it’s here and it isn’t going anywhere but up in popularity.

<div style="float:right;margin:6px;">
GTC-31.jpg
</div>One of the most interesting aspects about the GTC is that NVIDIA took a back seat and other than holding a few information sessions for DirectCompute, DX11, etc., they let the attendees take the wheel and decide where to guide the sessions. This worked out extremely well for them since it portrayed GPU computing as an inclusive medium for programmers that is easily accessible for professionals and regular consumers alike. Honestly, before going I had convinced myself that the GTC was doomed to failure and I don’t even think NVIDIA was prepared for the response they received. They expected around 800 people to attend not including about 100 attendees from the press but what they got was 1620 professionals representing a massive array of companies and government organizations.

While it is amazing to see how far the GPGPU market has come in a mere three years, there are plenty of other technologies NVIDIA has been pushing. With Intel’s Atom making some serious waves in the mobile computing and small form factor markets, NVIDIA’s ION and forthcoming ION2 are poised to ride the Atom’s coattails to serious success. People are looking for efficiency, performance and miniaturization within a PC that they can use on a daily basis and with the Atom / ION combination, they can get exactly that. The Tegra mobile chip is also making some inroads in nearly every area you can think of. Not only will it inspire innovative automotive solutions but we will soon see it in media players (the Zune HD is the first of many), GPS units and a slew of other devices.

As the focus of the industry moves towards Fermi, there are still many questions to be answered, particularly about any GeForce iterations thereof. What NVIDIA needs is a price-competitive solution that is able to outperform ATI’s DX11 cards while offering the most in terms of value. If the benchmarks prove that Fermi is superior to the HD 5800-series, then NVIDIA will once again be on firm footing from a gamer’s point of view.

The majority of people reading Hardware Canucks right now are gamers and enthusiasts who use their graphics cards for playing games. Unfortunately, we sometimes have a narrow view of the GPU world that centers around review graphs and sky-high framerates but nowadays, there is more to the industry than that. We have to remember that even though NVIDIA is thinking outside of its typical gaming markets, the technologies it is developing within CUDA can and will have positive implications for the gaming community as well. In NVIDIA’s world, the role of the GPU hasn’t changed. Rather, it has evolved into a tool that can be used to not only show worlds of fantasy but also help unravel medical mysteries or be used in a studio to visualize a movie's special effects.

The fact of the matter remains that last year NVIDIA lost a small fortune, their stock value plummeted and their core markets are under constant attack from competitors. They need to turn things around in record time and while the success of the GTC provides a ray of hope for the future, convincing developers to use NVIDIA’s platforms is just the first skirmish in what looks to be a battle for survival. To be honest though, if the technologies and programs associated with CUDA help to save even a single life, be it through quicker, more effective mammograms, a 3D image of the heart or the prediction of a levee break, then in my opinion all the naysayers can eat their words. Why? Because NVIDIA will have proven that their formula for a GPGPU ecosystem really does work and can be used to help humanity as a whole instead of having GPUs relegated to just rendering pretty games.

GTC-6.jpg
 