graphics processing unit (GPU)

A graphics processing unit (GPU) is a computer chip that renders graphics and images by performing rapid mathematical calculations. GPUs are used for both professional and personal computing. Traditionally, GPUs have been responsible for rendering 2D and 3D images, animations and video, though their range of uses has since expanded well beyond graphics.

In the early days of computing, the central processing unit (CPU) performed these calculations. As more graphics-intensive applications were developed, however, their demands put a strain on the CPU and decreased performance. GPUs were developed as a way to offload those tasks from CPUs and to improve the rendering of 3D graphics. GPUs work by using a method called parallel processing, where multiple processors handle separate parts of the same task.
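
To make the idea concrete, here is a minimal CUDA sketch of that offloading model -- the kernel and variable names are illustrative, not taken from any particular product. The CPU (host) copies two arrays to the GPU (device), thousands of GPU threads each add one pair of elements in parallel, and the result is copied back:

    #include <cuda_runtime.h>
    #include <cstdio>

    // Each GPU thread handles one element of the arrays -- many run at once.
    __global__ void addArrays(const float *a, const float *b, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            out[i] = a[i] + b[i];
        }
    }

    int main() {
        const int n = 1 << 20;                      // one million elements
        const size_t bytes = n * sizeof(float);

        // Host (CPU) buffers.
        float *hA = new float[n], *hB = new float[n], *hOut = new float[n];
        for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

        // Device (GPU) buffers -- the GPU works on its own copy of the data.
        float *dA, *dB, *dOut;
        cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dOut, bytes);
        cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

        // Launch enough threads to cover every element.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        addArrays<<<blocks, threads>>>(dA, dB, dOut, n);

        cudaMemcpy(hOut, dOut, bytes, cudaMemcpyDeviceToHost);
        printf("out[0] = %f\n", hOut[0]);           // expect 3.0

        cudaFree(dA); cudaFree(dB); cudaFree(dOut);
        delete[] hA; delete[] hB; delete[] hOut;
        return 0;
    }

On a CPU, the same work would typically run as a sequential loop or be split across a handful of cores; on the GPU, every element gets its own lightweight thread.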

GPUs are well known in PC (personal computer) gaming, allowing for smooth, high-quality graphics rendering. Developers also began using GPUs as a way to accelerate workloads in areas such as artificial intelligence (AI).

What are GPUs used for today?

Today, graphics chips are being adapted to a wider variety of tasks than they were originally designed for, partly because modern GPUs are more programmable than they once were.

Some examples of GPU use cases include:

  • GPUs can accelerate the rendering of real-time 2D and 3D graphics applications.
  • Video editing and the creation of video content have improved with GPUs. Video editors and graphic designers, for example, can use the parallel processing of a GPU to speed up the rendering of high-definition video and graphics.
  • Video game graphics have become more computationally intensive, so keeping up with display technologies -- such as 4K and high refresh rates -- has put a premium on high-performing GPUs.
  • GPUs can accelerate machine learning. With the high computational throughput of a GPU, workloads such as image recognition can be sped up significantly.
  • GPUs can share the work of CPUs and train deep learning neural networks for AI applications. Each node in a neural network performs calculations as part of an analytical model. Programmers eventually realized that they could use the power of GPUs to increase the performance of models across a deep learning matrix -- taking advantage of far more parallelism than is possible with conventional CPUs -- and GPU vendors now build GPUs specifically for deep learning (a minimal sketch of this kind of workload appears after this list).
  • GPUs have also been used to mine bitcoin and other cryptocurrencies like Ethereum.
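
As a rough sketch of the deep learning workload mentioned above, the CUDA kernel below computes a dense matrix multiply, the operation that dominates neural-network training. It assumes the matrices have already been copied to GPU memory in the same way as the earlier sketch; the function and variable names are illustrative:

    #include <cuda_runtime.h>

    // Minimal dense matrix multiply C = A * B for square n x n matrices.
    // Each GPU thread computes one output element -- the same data-parallel
    // pattern that underlies the layer computations in deep learning.
    __global__ void matmul(const float *A, const float *B, float *C, int n) {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < n && col < n) {
            float sum = 0.0f;
            for (int k = 0; k < n; ++k) {
                sum += A[row * n + k] * B[k * n + col];
            }
            C[row * n + col] = sum;
        }
    }

    // Host-side launch for an n x n problem (dA, dB, dC already on the GPU):
    //   dim3 block(16, 16);
    //   dim3 grid((n + 15) / 16, (n + 15) / 16);
    //   matmul<<<grid, block>>>(dA, dB, dC, n);

Production deep learning code would use tuned libraries rather than a hand-written kernel, but the parallel structure is the same: one thread per output value.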

How a GPU works

A GPU may be found integrated with a CPU on the same electronic circuit, on a graphics card or in the motherboard of a personal computer or server. GPUs and CPUs are fairly similar in construction, but GPUs are specifically designed to perform the more complex mathematical and geometric calculations needed to render graphics, and they may contain more transistors than a CPU.

GPUs use parallel processing, in which multiple processors handle separate parts of the same task. A GPU also has its own RAM (random access memory) to store data on the images it processes. Information about each pixel is stored, including its location on the display. A digital-to-analog converter (DAC) connected to the RAM turns the image into an analog signal so the monitor can display it. Video RAM typically operates at high speed.
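
As a small sketch of that separate video memory -- assuming a CUDA-capable card and using illustrative sizes -- the program below asks the GPU how much dedicated RAM it has and reserves space in it for one frame of pixel data:

    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        // Report the GPU's dedicated video RAM, which is separate from system RAM.
        size_t freeBytes = 0, totalBytes = 0;
        cudaMemGetInfo(&freeBytes, &totalBytes);
        printf("GPU memory: %zu MB free of %zu MB total\n",
               freeBytes >> 20, totalBytes >> 20);

        // Reserve space in video RAM for one 1920x1080 frame of RGBA pixel data.
        const size_t frameBytes = 1920 * 1080 * 4;
        unsigned char *dFrame = nullptr;
        cudaMalloc(&dFrame, frameBytes);

        // ... kernels would write per-pixel color values into dFrame here ...

        cudaFree(dFrame);
        return 0;
    }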

GPUs come in two types: integrated and discrete. An integrated GPU is embedded alongside the CPU on the same chip, while a discrete GPU is a separate chip mounted on its own circuit board.

For companies that require heavy computing power or work with machine learning or 3D visualizations, hosting GPUs in the cloud may be a good option. An example of this is Google's Cloud GPUs, which offer high-performance GPUs on Google Cloud. Hosting GPUs in the cloud frees up local resources and saves time and money while improving scalability. Users can choose from a range of GPU types and get flexible performance to suit their needs.

GPU vs. CPU

GPU and CPU architectures are fairly similar. However, CPUs are used to respond to and process the basic instructions that drive a computer, while GPUs are designed specifically to render high-resolution images and video quickly. Essentially, CPUs are responsible for interpreting most of a computer's commands, while GPUs focus on graphics rendering.

In general, a GPU is designed for data parallelism: applying the same instruction to many data items at once (SIMD). A CPU is designed for task parallelism: performing different operations concurrently.
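
A brief sketch of that distinction: the CUDA kernel below applies the same instruction -- a brightness scale -- to many pixels at once, one pixel per GPU thread. The kernel name and pixel format are assumptions for illustration:

    // SIMD-style data parallelism: every GPU thread executes the same
    // instruction (scale a pixel's brightness) on a different data item.
    __global__ void scaleBrightness(unsigned char *pixels, int numPixels, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < numPixels) {
            float v = pixels[i] * factor;
            pixels[i] = v > 255.0f ? 255 : (unsigned char)v;
        }
    }

A CPU, by contrast, would usually walk the same pixels in a loop, or split them across a few cores that are also busy with unrelated tasks.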

Both are also differentiated by the number of cores. A core is essentially a processor within the processor. Most CPUs have between four and eight cores, though some have up to 32. Each core can process its own tasks, or threads. Because some processors have multithreading capability -- in which the core is divided virtually, allowing a single core to process two threads -- the number of threads can be much higher than the number of cores, which is useful in video editing and transcoding. CPUs can run two threads (independent streams of instructions) per core, while a GPU can run four to 10 threads per core.
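
One way to see those counts on a real machine, assuming a CUDA-capable GPU is present, is to query both sides; the short sketch below prints the host's hardware thread count next to the GPU's multiprocessor and resident-thread figures (exact numbers vary by chip):

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <thread>

    int main() {
        // CPU side: how many hardware threads the host can run at once.
        printf("CPU hardware threads: %u\n", std::thread::hardware_concurrency());

        // GPU side: streaming multiprocessors and the threads each can keep in flight.
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);
        printf("GPU \"%s\": %d multiprocessors, up to %d resident threads each\n",
               prop.name, prop.multiProcessorCount, prop.maxThreadsPerMultiProcessor);
        return 0;
    }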

A GPU is able to render images more quickly than a CPU because of its parallel-processing architecture, which allows it to perform multiple calculations at the same time. A single CPU core does not have this capability, although multicore processors can perform some calculations in parallel by combining multiple cores onto the same chip.

A CPU also has a higher clock speed, meaning it can perform an individual calculation faster than a GPU, so it is often better equipped to handle basic computing tasks.

GPU vs. graphics card: Similarities and differences

GPU and graphics card are two terms that are sometimes used interchangeably. However, there are some important distinctions between the two. The main difference is that the GPU is a specific unit within a graphics card. The GPU is what performs the actual image and graphics processing. A graphics card is what presents images to the display unit.

Top GPUs and graphics cards in the market

Nvidia, Advanced Micro Devices (AMD), Intel and Arm are some of the major players in the GPU market.

In 2020, some of the top GPUs and graphics cards included:

  • GeForce RTX 3080
  • GeForce RTX 3090
  • GeForce RTX 3060 Ti
  • AMD Radeon RX 6800 XT
  • AMD Radeon RX 5600 XT

When looking to buy a graphics card, an individual should keep its price, overall value, performance, features, amount of video memory and availability in mind. Features consumers may care about include support for 4K, 60 fps (frames per second) or more, and ray tracing. Price will sometimes be a deciding factor, as some GPUs may be twice the cost for only 10%-15% more performance.

History of GPUs

Specialized chips for processing graphics have existed since the dawn of video games in the 1970s. Early on, graphics capabilities were provided as part of a video card: a discrete, dedicated circuit board with its own silicon chip and cooling that handles 2D, 3D and sometimes even general-purpose (GPGPU) calculations for a computer. Modern cards with integrated calculations for triangle setup, transformation and lighting features for 3D applications are typically called GPUs. Once rare, higher-end GPUs are now common and are sometimes integrated into CPUs themselves. Alternate terms include graphics card, display adapter, video adapter, video board and almost any combination of the words in these terms.

Graphics processing units came to high-performance enterprise computers in the late 1990s, and Nvidia introduced the first GPU for personal computers, the GeForce 256, in 1999.

Over time, the processing power of GPUs made the chips a popular choice for other resource-intensive tasks unrelated to graphics. Early applications included scientific calculations and modeling; by the mid-2010s, GPU computing also powered machine learning and AI software.

In 2012, Nvidia released a virtualized GPU, which offloads graphics processing power from the server CPU in a virtual desktop infrastructure (VDI). Graphics performance has traditionally been one of the most common complaints among users of virtual desktops and applications, and virtualized GPUs aim to address that problem.

Ray tracing and other recent trends

A few recent trends in GPU technology include:

  • As of 2019, GPU vendors typically provide GPU virtualization, and new and more powerful GPU chips are coming out on a regular basis.
  • In 2019, AMD introduced its full line of Radeon RX 5700 series GPUs. The series is based on AMD's Navi GPU architecture. Navi is seen as an upgrade to AMD's Graphics Core Next technology.
  • Arm targeted the mobile augmented reality (AR) and virtual reality (VR) market with its Mali-G77 processors.
  • Nvidia continued to push its ray tracing capabilities as part of its RTX platform. Ray tracing is seen as the next step in the evolution of graphics rendering after rasterization. While rasterization represents a 3D model as a mesh of triangles projected onto the screen, ray tracing produces realistic lighting by simulating the physical behavior of light, tracing the path of rays through each pixel in the image plane and modeling how they interact with the scene.
  • Enterprise-grade, data center GPUs are helping organizations harness parallel processing capabilities through hardware upgrades, accelerating workflows and graphics-intensive applications.