AMD Battles Intel and NVIDIA for CPU and GPU Dominance

If you follow the GPU, CPU and APU space, this is a fascinating moment in the history of desktop graphics cards and CPUs. NVIDIA is winning praise for seeing the connection between artificial intelligence (AI) and GPUs before anyone else did, but AMD (Advanced Micro Devices) has come roaring back to life and is suddenly poised to pry a 40 percent share of the CPU market from Intel’s grip with its new Threadripper CPUs.

But of the three chipmaking competitors, AMD stands out because it makes both CPUs and GPUs, whereas NVIDIA makes only GPUs and Intel makes only CPUs. It seems like it would be hard to dethrone both companies, given that NVIDIA and Intel each have more resources at their disposal.

If you’re in AMD’s position, you design, engineer and produce both GPUs and CPUs, and you compete with the top producer in each market. That’s quite a challenge. The science behind packing more and more processing power onto a chip is fascinating and has come a long way over the years.

History of the GPU

  • Since the 1970s, when video games were introduced and grew in popularity, the wonderment provided by early graphics processing gave way to a relentless tidal wave of demand for better performance and better content.
  • At that time, game consoles didn’t contain the graphics microprocessors we see in GPUs today. Instead, they used video controllers that were coded to perform visual effects in a specific way for each different game.
  • When the first central processing units (CPUs) began appearing in consoles and computers, they, of course, also handled the graphics processing. In the mid-1980s, the idea of a separate processor dedicated to spatial data began to surface. Known as a discrete GPU, it would take over rendering tasks offloaded from the CPU, relieving some of the central processing workload.

The TMS34010, which was released in 1986, was among the first microprocessors designed to process offloaded graphics rendering separately from the CPU.
  • The original Macintosh 128K computer popularized the graphical user interface (GUI), which meant that more computation and more hardware had to be devoted just to the baseline visual display.
  • As personal computers grew in popularity, IBM’s PS/2 computer, released in 1987, offered the 8514/A video graphics card as an optional upgrade to the built-in Video Graphics Array (VGA). The 8514/A differed from the graphics coprocessor boards commonly found in pricey workstation computers in that it was less programmable, which improved its cost-to-performance ratio and made it attractive to the masses. The strategy worked: the 8514/A became the first widely adopted fixed-function accelerator, supporting 256 colors and taking offloaded rendering jobs, such as line drawing, from the CPU.
  • The 8514/A was so popular that clones began to surface. Canada-based ATI released the Wonder series of graphics cards, which was among the first to support multiple monitors and to switch graphics modes and resolutions on the fly, a relatively new capability at the time. APIs like OpenGL and DirectX came along in the early-to-mid 1990s, allowing coders to program for different graphics adapters.
  • The original PlayStation console was one of the first mass-market systems built around 3D graphics, and it was a monster hit that set off a race to bring 3D-capable GPUs to PCs.

The 3Dfx Voodoo was a discrete GPU for gaming only and the first to support a multi-GPU setup.
  • In 1999, NVIDIA released the GeForce 256, which let the CPU offload the transform and lighting work that maps 3D scenes onto a 2D display. By 2001, many graphics card makers had gone out of business, leaving NVIDIA and ATI to duke it out for market dominance.
  • Between 2001 and 2004, pixel shading became common in both brands’ products, with ATI playing catch-up and the PCI Express interface replacing the older, slower AGP interface.
  • In 2006, ATI was acquired by AMD for about $5 billion. NVIDIA rolled out the popular GeForce 8800 GTX, a power-hungry GPU with an enormous number of transistors, a unified shader architecture that could process multiple graphics effects simultaneously, and stream processors that allowed graphical tasks to be processed in parallel. That architecture opened the door to general-purpose GPU computing.
  • There’s obviously a lot more to the evolution of the GPU, but this brings us to a point where you can see the path to our current GPU battle between AMD and NVIDIA.

Behind AMD’s Recent Success

Among the flood of news out of the most recent SIGGRAPH event was the announcement that the Vega 10 GPU would power the Radeon RX Vega 64. The cards became available on August 14, with prices across the Radeon RX Vega lineup ranging from $399 to $599 (although there was some confusion over a misleading introductory offer whose price jumped $100 a few days after launch).
The Radeon RX Vega 64, which is priced at $499, is primarily for high-end gaming, and is intended to give NVIDIA’s GTX 1080 a run for its money. (Image courtesy of AMD.)
The performance reviews for the Radeon RX Vega 64 have been mixed so far across online communities, with most reviewers finding its performance roughly on par with NVIDIA’s GTX 1080, if not slightly worse.

The Radeon RX Vega 56 hasn’t been reviewed as extensively, but it’s being touted by the company and some reviewers as an alternative to NVIDIA’s GTX 1070.

A Word About Engineering GPUs for CAD Versus Gaming

GPUs designed for gaming and for CAD differ mainly in precision. A gaming GPU is built to handle much lower polygon counts in 3D model geometry than a CAD GPU, and it can rely on bump maps to approximate fine detail. CAD applications cannot take that shortcut, because the geometry itself must be precise.

Engineering giants like Boeing and chipmakers like AMD model components, some of them very small, that can run to a billion polygons if the part is complex enough.

Accuracy, double-precision calculations and 3D data integrity are absolute requirements for designing and engineering sophisticated parts and components, because errors that slip through to production translate directly into cost overruns.
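To make the precision point concrete, here is a minimal sketch in Python with NumPy; the millimeter values are illustrative assumptions, not figures from any particular CAD package or workload. It shows how single-precision floats can silently swallow a micron-scale feature on a part placed far from the model origin, while double precision preserves it.

```python
import numpy as np

# Hypothetical numbers: a part sits ~100 m (100,000 mm) from the assembly origin,
# while one of its features is specified at micron (0.001 mm) resolution.
origin_offset_mm = 100_000.0
feature_size_mm = 0.001

# Single precision: the spacing between representable floats near 100,000
# is about 0.0078 mm, so the micron-scale feature is rounded away entirely.
single = np.float32(origin_offset_mm) + np.float32(feature_size_mm)
print(single == np.float32(origin_offset_mm))  # True -- the feature is lost

# Double precision: the feature survives with precision to spare.
double = np.float64(origin_offset_mm) + np.float64(feature_size_mm)
print(double)  # 100000.001
```

The comparison prints True in single precision because the addition rounds straight back to the original coordinate, which is exactly the kind of silent geometric error that CAD workflows cannot tolerate.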

GPUs for CAD workstations have demanding requirements to meet, and some of that computational burden is eased by custom driver implementations built and tuned for specific professional applications.

AMD Strikes Hard with Threadripper 1920X and 1950X CPUs

In contrast to the mixed but relatively positive reception of AMD’s Radeon RX Vega 64, its recent CPU release, the Threadripper 1950X, is getting rave reviews from every corner of the Internet. That may have Intel sweating a little, but keep in mind that Intel just revealed its 8th-generation CPUs in an August 21 Facebook Live event.

We’ll wait to see the details of what Intel has unveiled, but AMD is beating Intel in this space on two fronts: price and performance. For $1,000, AMD’s Threadripper 1950X performs better than its closest Intel equivalent, the Core i9-7900X.