To play video games, you need a way to turn data into images on a screen. Modern systems rely on dedicated hardware to do this efficiently: graphics processing units, or GPUs. GPUs are a must in the modern gaming landscape, and they're why games today can look so good.
But where did GPUs come from, and why are they indispensable to gaming experiences today?
The simple answer to what a GPU is: a graphics processing unit, a specialized component of a computing system dedicated to accelerating image processing. While the name technically refers to a specific chip, it's also commonly used interchangeably with the entire graphics card. For the sake of accuracy, in this article GPU means the chip itself.
Graphics processing has always existed in computing because displays have always needed to be driven, but dedicated processing units have become increasingly necessary as the demands on display systems have grown.
So what is a GPU capable of that makes it necessary? Modern GPUs handle technologies like ray tracing, DLSS, AI workloads, shadow calculations, texture work, and much more. Essentially, all of the 3D rendering you see on your screen, as well as the 2D work, falls within the realm of what a GPU handles.
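To make that per-pixel idea concrete, here is a minimal sketch, not taken from any real game engine, of the kind of data-parallel work a GPU is built for: a hypothetical CUDA kernel that applies a simple gamma curve to every pixel of a frame. The kernel name, frame size, and gamma value are illustrative assumptions.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Hypothetical per-pixel operation: apply a gamma curve to a grayscale frame.
// Each GPU thread handles exactly one pixel, so millions of pixels are
// processed in parallel instead of one after another.
__global__ void gammaCorrect(float* pixels, int width, int height, float gamma)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        int idx = y * width + x;
        pixels[idx] = powf(pixels[idx], gamma);
    }
}

int main()
{
    const int width = 1920, height = 1080;               // one 1080p frame
    const size_t bytes = (size_t)width * height * sizeof(float);

    float* d_pixels = nullptr;
    cudaMalloc((void**)&d_pixels, bytes);                 // frame buffer on the GPU
    cudaMemset(d_pixels, 0, bytes);                       // placeholder image data

    dim3 block(16, 16);                                   // 256 threads per block
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);

    gammaCorrect<<<grid, block>>>(d_pixels, width, height, 2.2f);
    cudaDeviceSynchronize();

    cudaFree(d_pixels);
    return 0;
}
```

The design choice worth noticing is one thread per pixel: the GPU's job is to run the same small calculation across an entire frame at once, which is exactly the shape of most shading, texturing, and post-processing work.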
Dedicated graphics processing units can come as part of complete graphics cards, as we commonly see in the PC gaming space. They can also come as chips within closed systems, as we see in gaming consoles or mobile phones. Because these platforms now share broadly similar GPU capabilities, cross-platform games are also more seamless today.
The first graphics chips originated in arcade video games in the 1970s, which used purpose-built hardware for their limited early titles. What was a GPU capable of this early on? These chips pushed pixels from the main board onto the arcade machine's display, useful work, but hardly what we think of when we picture advanced GPUs today.
Home systems first used a device's central processing unit, or CPU, to manage graphics output. This would eventually prove a limiting approach: a CPU executes only a handful of tasks at a time, while rendering a frame means repeating the same simple operations across millions of pixels, work far better suited to massively parallel hardware.
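For contrast, here is a rough sketch of our own, not code from any real console, showing the same per-pixel work done the way an early CPU-driven system had to do it: one pixel at a time in a loop. With millions of pixels per frame, dozens of frames per second, and the CPU also running game logic, this approach quickly runs out of headroom.

```cuda
#include <math.h>
#include <stddef.h>

// CPU-only version of the same per-pixel work: a single core walks the
// frame buffer one pixel at a time. Purely illustrative.
void gammaCorrectCpu(float* pixels, int width, int height, float gamma)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            size_t idx = (size_t)y * width + x;
            pixels[idx] = powf(pixels[idx], gamma);
        }
    }
}
```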
Consoles like the Nintendo 64 and PlayStation would adopt dedicated hardware for 3D graphics processing, opening the doors to a new generation of visuals. In the PC space, it was NVIDIA's GeForce 256 in 1999 that popularized the term GPU, improving performance over software-driven rendering by leaps and bounds. Looked at today, the GPU has become a powerful tool that doesn't just enable basic functionality, it vastly expands what's possible.
Each generation of AAA console and PC gaming pushes the envelope of graphical fidelity and performance. As game developers promise bigger and more detailed worlds, hardware makers need to keep pace. That means more transistors, more powerful processing units, and better cooling.
A GPU was once an optional part of some devices. Today, a better question is: what isn't GPU hardware useful for in modern visual applications?
With Moore's Law now struggling to maintain momentum, there are real questions about what happens when the GPU market can't keep up. The PS5 and Xbox Series X are huge, new graphics cards keep getting bigger, and GPUs are consuming more power than ever, a clear sign of how this market has evolved.
As with past generational leaps in dedicated processing hardware, it might be about time for another revolution in graphics processing. With diminishing returns on the horizon, it's an exciting time in gaming, where real innovation may be the only way forward, at least for AAA titles. Until then, we're quite happy to keep playing indie and less demanding experiences.