The Rise of the GPU

August 21

I’ve spent a good portion of my career selling computers in one form or another. The sales process used to start with the CPU, which makes sense because it’s the brain of the computer and determines the motherboard, RAM, expansion slots, and so forth. Most software was CPU-bound, and for many years the clock-speed race trumped almost every other metric, because programs were single-threaded and didn’t benefit from a higher physical core count. Only recently have we seen programs written to take advantage of multi-core CPUs.

And while the CPU is still a critical component of every computer built, the GPU (Graphics Processing Unit) is garnering a lot of attention lately. New GPU announcements, such as the recent GeForce GTX 1080 cards, are heralded by every tech publication and blog. Meanwhile, Intel seems to announce new CPUs without much fanfare anymore.


A Graphics Processing Unit is the chip that sits on a graphics card, but it is not synonymous with the card.


The GPU has always been an important choice for gamers, who push GPUs to their limits rendering 3D worlds full of textures, colors, and detail. Even today it’s not uncommon for a gamer to spend three to four times as much money on a GPU as on a CPU. The CPU is no longer the factor that determines how well a game will play. But gaming is only one segment of the market that’s experienced a surge in GPU adoption. This week I’d like to look at a number of factors driving interest in GPUs from NVIDIA and AMD.

CPU vs. GPU

Before we dive further, it’s not a bad idea to look at some of the main differences between today’s CPUs and GPUs. I’ve heard a lot of analogies that capture the contrast between the two. Here are a few of my favorites:

  • The CPU is the brain of the computer, and the GPU is the soul.
  • The CPU can do many tasks moderately well while the GPU does one or two tasks incredibly well.
  • The CPU is for general purpose tasks while the GPU is for specialized tasks.
  • CPUs are latency oriented while GPUs are throughput oriented.
  • The CPU is like a few oxen pulling a cart while the GPU is 10,000 chickens pulling the same cart.

OK, so that last one went off the rails a bit, but you get the main idea. In short, the CPU is the jack of all trades while the GPU is optimized to perform a few tasks, over and over, at a very high rate of speed. CPUs are composed of a few cores (1 to 20), backed by lots of cache memory, that can handle a few software threads at a time. GPUs can have hundreds of cores that handle thousands of threads simultaneously. These differences are important to keep in mind as we consider changes to the way we use computing power.
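To make that concrete, here’s a minimal CUDA sketch of the “thousands of threads” model (error handling and data initialization are left out, and the names are purely illustrative): instead of one loop walking through a million array elements on one core, the program launches a million lightweight GPU threads, each of which touches a single element.

```cuda
#include <cuda_runtime.h>

// Each GPU thread scales exactly one element -- the "10,000 chickens" model.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique index for this thread
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;                 // about a million elements
    float *d = nullptr;
    cudaMalloc((void **)&d, n * sizeof(float));

    // A CPU would do this with a single loop on one core:
    //     for (int i = 0; i < n; ++i) data[i] *= 2.0f;
    // The GPU instead launches 4096 blocks x 256 threads -- over a million
    // threads in flight, one per element.
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d);
    return 0;
}
```

The trade is exactly the oxen-versus-chickens analogy above: a few fast, flexible workers on the CPU versus an enormous number of simple ones on the GPU.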

It can get confusing at times, because modern CPUs can contain low-power, integrated graphics chips. These are fine if all you’re doing is sending emails or using Microsoft Office applications. But any program that renders 3D models or environments is going to perform best with a dedicated GPU.

NVIDIA created an entertaining comparison of how a CPU and GPU might paint the Mona Lisa.


GPU Acceleration

GPU-accelerated computing allows applications to offload their most intensive portions to the GPU while the rest of the code is handled by the CPU. That’s just a fancy way of saying certain applications run significantly faster. NVIDIA is the grandfather of GPU acceleration and optimizes its line of Quadro and Tesla cards to work with specific scientific, engineering, and enterprise applications. NVIDIA maintains a database of applications that have been certified to work with its cards, so it’s wise to check that your application is supported before investing in a Quadro or Tesla.
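As a rough sketch of what that offload looks like in code (a generic CUDA example, not any particular vendor’s implementation; the array names and sizes are made up), a GPU-accelerated program typically copies its data to the card, hands the heavy math to a kernel, and keeps the CPU free for everything else until the result comes back:

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Hypothetical "hot spot": summing two large arrays element by element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 22;
    const size_t bytes = n * sizeof(float);
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    // 1. Move the inputs from CPU (host) memory to GPU (device) memory.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, a.data(), bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b.data(), bytes, cudaMemcpyHostToDevice);

    // 2. The GPU crunches the intensive portion asynchronously...
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    // ...while the CPU stays free for the rest of the application
    // (user interface, file I/O, business logic) until it needs the result.

    // 3. Copy the result back and continue on the CPU.
    cudaMemcpy(c.data(), dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f\n", c[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```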

Software like Autodesk Maya supports GPU acceleration

Adobe and Autodesk make some of the most popular GPU-accelerated applications on the market. If your employees are running a product like Adobe Premiere Pro, Autodesk 3ds Max, or Maya, you should check to make sure your workstations take advantage of this technology. I’ve seen some applications run up to 10x faster with GPU acceleration enabled than without it. Keep in mind that you’ll need to enable it in each supported program, as there’s no global setting in Windows.

CUDA vs. OpenCL

Not that we need another layer of technology, but now that we understand what GPU acceleration does, it’s helpful to understand how NVIDIA and AMD implement this feature in their products. NVIDIA’s version is called CUDA, a proprietary framework found only on NVIDIA cards. If you read that a certain application includes CUDA support, you’ll want to pair that application with an NVIDIA GPU. AMD cards support OpenCL, an open standard. If you know your application supports OpenCL, you’ll often be better off pairing it with a card from AMD.
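For a sense of how an application decides which path it can use, here’s a small, hypothetical check built on the CUDA runtime API: it asks how many CUDA-capable devices are present and, if the answer is none (say, on a machine with only an AMD card), the program would fall back to its OpenCL or CPU code path. The fallback itself is just a placeholder here.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);

    if (err != cudaSuccess) {
        // No CUDA runtime/driver available at all.
        printf("CUDA unavailable (%s); would fall back to OpenCL or CPU.\n",
               cudaGetErrorString(err));
        return 0;
    }
    if (count == 0) {
        printf("No CUDA-capable GPU found; would fall back to OpenCL or CPU.\n");
        return 0;
    }

    // List the NVIDIA GPUs the CUDA path could use.
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s (compute capability %d.%d)\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```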


I should note that NVIDIA cards also support OpenCL applications, but they aren’t as efficient at it as AMD cards until you get into higher-end GPUs such as the GTX 980. Some applications support both CUDA and OpenCL, but that’s unusual; most of the time you’ll have to choose one or the other. My experience selling both brands leads me to recommend an NVIDIA card under most circumstances, as I’ve found them to be the most reliable. But there are applications that simply run better on AMD, and the popular Final Cut Pro is one of them. Your application is generally going to determine which GPU you select, not the other way around.

Because OpenCL is an open standard, it’s easier to adopt and integrate into applications, which results in a much larger list of OpenCL-optimized applications compared to CUDA. Where NVIDIA shines is in the care and polish it requires from the partners who implement CUDA. Which one is better? Both will provide a substantial performance increase if implemented correctly. OpenCL is supported by a larger number of applications, while CUDA has been embraced by the scientific and HPC communities.

Why the Rise?

A number of factors have played into the rise of the GPU. While consumers were flocking to their smartphones, tablets, and ultrabooks, gamers were pushing their graphics cards to the max. NVIDIA responded with a feature called SLI that allowed gamers to combine two or more GPUs for the ultimate gaming experience, and AMD offered a similar technology called CrossFire for its line of cards. The CPU was no longer viewed as the bottleneck for gaming.

Intel hasn’t helped the perception by offering new CPUs with measly 10% performance gains. Meanwhile, GPUs have been nearly doubling in performance each year. I’ve been running the same Intel i5-2500K CPU in my desktop for nearly four years. In that same time, I’ve upgraded my GPU four times, about once a year. My CPU will handle just about any game I throw at it, but the latest games require the latest GPUs.

And it’s not just gamers that are clamoring for faster GPUs.


Autonomous cars must be able to process massive amounts of data on the fly

Drones and Autonomous Cars Require Powerful GPUs

The rise of scientific, engineering, and HPC applications is also driving adoption of GPU platforms. These applications are optimized for the “do a few tasks very fast” nature of the GPU. So we’re seeing GPUs analyze mountains of autonomous-driving data from Google and Tesla, model weather from remote locations around the globe, and analyze DNA sequencing in medical labs. More applications are being created and optimized for the strengths inherent to the GPU, and both AMD and NVIDIA have aggressively marketed their products to anyone running applications that can accelerate beyond what the CPU can do. For many people, the CPU has reached the point of “good enough.”

What does the future hold for the GPU?

We’re only at the beginning of machine learning. Technologies such as sensors, drones, and smart appliances create data for processors to analyze. With VR and AR moving into the mainstream over the next few years, the job of processing 3D environments on the fly will fall to the GPU. Today’s virtual reality headsets, like the Oculus Rift and HTC Vive, require high-end GPUs to build those environments and deliver frame rates that won’t make users dizzy.

Drones and autonomous cars, along with advanced scientific and engineering applications, will continue to fuel our desire for more powerful GPU processing.