Understanding Hardware: What is a CPU, and What Does it Do?

Cam Summerson

In the hierarchy of computer hardware, there’s probably nothing more important than the CPU. This is effectively the “brains” of the entire system — just as the human brain controls the body's functions, the CPU controls the functions of the computer. Regardless of how loosely we’re defining “computer” here, smartphones, tablets, laptops, desktops, and beyond qualify. 


What is a CPU? 

CPU stands for Central Processing Unit. In modern usage, the CPU goes by many names: chip, processor, chipset, and even just “silicon.” Most of these terms are at least loosely accurate (strictly speaking, a “chipset” refers to the supporting hardware around the processor), and it’s reasonably safe to assume that when you hear them in technical conversation, they all refer to the same thing.

Common examples of the CPU in current technology are Intel’s Core i-series chips in many desktop and laptop computers and the Qualcomm Snapdragon series in smartphones and tablets. There are a variety of other manufacturers in the CPU space, like AMD and MediaTek, along with many smaller vendors.

What does a CPU do? 

As outlined above, the CPU is the brain of any computer system. For a more straightforward comparison, you can think of the CPU as the conductor of an orchestra, directing each piece of hardware on what to do and when to do it. The CPU executes everything you do on a device — from simple keystrokes to opening files, playing games, or editing photos. 

To put this into perspective, let’s pretend your computer is a restaurant. The restaurant has different parts, like the lobby, dining room, and kitchen. In this case, the kitchen is the CPU: requests from other parts of the restaurant come to the kitchen, where the meal is prepared. We’ll expand on this analogy in more detail later, but this is the foundation of the concept. 

The CPU’s job is to execute instructions, performing millions or billions of calculations per second. It does this by utilizing specific components like the ALU (Arithmetic Logic Unit) for mathematical operations, the Control Unit for instruction interpretation, and registers for temporary data storage. Each instruction is fetched from memory, decoded, executed, and its result written back to a register or memory. This fetch-decode-execute sequence is the instruction cycle, and its steps are paced by the ticks of the CPU’s internal clock, called clock cycles.
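To make that loop concrete, here’s a toy sketch of a fetch-decode-execute cycle in Python. Everything here (the miniature instruction set, the register names, the list standing in for memory) is invented purely for illustration; a real CPU does this in silicon, billions of times per second.

```python
# A toy fetch-decode-execute loop. The instruction set, registers, and
# "memory" below are all made up for demonstration purposes.
memory = [
    ("LOAD", "r0", 5),          # put 5 into register r0
    ("LOAD", "r1", 7),          # put 7 into register r1
    ("ADD", "r2", "r0", "r1"),  # r2 = r0 + r1 (the ALU's job)
    ("HALT",),
]
registers = {"r0": 0, "r1": 0, "r2": 0}
pc = 0  # program counter: which instruction to fetch next

while True:
    instruction = memory[pc]    # fetch
    op = instruction[0]         # decode
    pc += 1
    if op == "LOAD":            # execute, then write the result back
        registers[instruction[1]] = instruction[2]
    elif op == "ADD":
        registers[instruction[1]] = registers[instruction[2]] + registers[instruction[3]]
    elif op == "HALT":
        break

print(registers)  # {'r0': 5, 'r1': 7, 'r2': 12}
```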

What is clock speed and why does it matter? 

A CPU’s clock speed is measured in hertz (and, at modern scales, megahertz or gigahertz) and describes how many clock cycles it can complete in a single second. So, logically, the higher the clock speed, the faster the processor. But, as things often are with computers, it’s not quite as simple as faster = better.
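For a sense of scale, here’s the arithmetic behind a clock speed figure. The numbers below are illustrative, not tied to any particular chip:

```python
# What a clock speed number actually means, using illustrative figures.
clock_speed_hz = 3.5e9  # a 3.5 GHz CPU ticks 3.5 billion times per second

nanoseconds_per_cycle = 1e9 / clock_speed_hz
print(f"One clock cycle takes {nanoseconds_per_cycle:.3f} ns")  # ~0.286 ns

# Real throughput also depends on how many instructions finish per cycle
# (IPC), which varies by design. That's one reason clock speed alone
# doesn't tell the whole story.
ipc = 4  # hypothetical instructions-per-cycle figure
print(f"Rough ceiling: {clock_speed_hz * ipc:.1e} instructions per second")
```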

One big reason is threads. A thread is the basic unit of execution within a processor, and it’s how the CPU organizes its tasks. Understanding CPU threads can quickly get complicated, so let’s simplify the concept. Say a chef is cooking a meal in our CPU-themed kitchen. He could cook one thing at a time, then move on to the next: if he’s making chicken fettuccine alfredo, he’d first cook the chicken, then the pasta, and then the sauce.

Now, imagine the chef has four arms. This way, he can cook the chicken, stir the pasta, and make the sauce at once. In this scenario, the chef is the CPU, and his four arms are the threads. 

Threads allow the CPU to work on multiple things at once instead of strictly one after another. Each thread’s instructions still take many clock cycles to complete, so a higher clock speed means the CPU gets through every thread’s work faster.

But it doesn’t stop there. Enter multithreading. 

What is multithreading? 

Let’s return to our chef analogy. Our guy has four arms, so he can cook four things at once. Let’s say a rush order for lasagna comes in alongside the chicken fettuccine alfredo. He could wait until he’s finished with the current meal to start the next one, or he could sprout a temporary extra arm from each of his existing arms to get even more done.

This is multithreading (Intel brands its implementation Hyper-Threading). Multithreading allows the most advanced CPUs to present additional virtual threads to the operating system, so they can do more without any additional hardware. The result is more work done, and done more efficiently, without increasing the clock speed.
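Here’s a minimal sketch of the idea in Python, with dishes standing in for tasks. (One caveat: Python’s global interpreter lock limits threads for pure number-crunching, but the scheduling concept mirrors what the hardware does.)

```python
# Four "arms" (threads) cooking four dishes at once instead of one at a time.
# Dish names and timings are invented for the example.
import time
from concurrent.futures import ThreadPoolExecutor

def cook(dish):
    time.sleep(1)  # stand-in for real work (waiting on I/O, hardware, etc.)
    return f"{dish} done"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as chef:  # the four arms
    results = list(chef.map(cook, ["chicken", "pasta", "sauce", "lasagna"]))
print(results)
print(f"Finished in {time.perf_counter() - start:.1f}s")  # ~1s, not ~4s
```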

Got it so far? Cool. Let’s make it more complicated. The CPU isn’t technically just a single brain: most modern CPUs are made up of multiple processors, called cores.

What is a CPU core? 

If you’ve read anything about CPUs (even a list of specs), you’ve likely seen something like “quad-core” or “octa-core” before. This indicates how many processing cores the CPU has. A quad-core chip has four separate cores, an octa-core has eight, and so on. Each core works independently of the others, almost as if the computer had several separate processors that can talk to each other.

Since each core operates independently, each has its own clock cycles and threads. Sometimes, the cores may work on totally different operations, but other times, they may work together on a more intensive process from a single application. 

In our chef analogy, this is like having four chefs with four arms in a single kitchen. The kitchen represents the CPU, while each chef is a processing core, and each arm is a thread. Starting to click now? 
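If you want to see how many cores and hardware threads your own machine reports, Python’s standard library can tell you in a couple of lines:

```python
# How many processing units does the OS see?
import os

print(os.cpu_count())  # logical CPUs: physical cores x threads per core

# The third-party psutil package (pip install psutil) can split the two:
# import psutil
# print(psutil.cpu_count(logical=False))  # physical cores only
```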

One final note before we move past how a CPU works: while every CPU has a defined clock speed, that doesn’t mean every core runs at the same speed. For example, some quad-core processors have two faster cores and two slower cores. This is called big.LITTLE architecture; it lets the faster “big” cores focus on heavy tasks without wasting time or resources on simpler tasks, which are handled by the slower, more efficient “LITTLE” cores.
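On Linux-based devices (including Android with shell access), you can often spot a big.LITTLE design by reading each core’s maximum frequency from sysfs. Treat this as a sketch: the paths are standard, but availability and permissions vary by kernel and device.

```python
# Print each core's maximum clock speed; on a big.LITTLE chip you'll
# typically see two or more distinct values. Linux-only.
import glob

for path in sorted(glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/cpuinfo_max_freq")):
    core = path.split("/")[5]  # e.g., "cpu0"
    with open(path) as f:
        khz = int(f.read().strip())
    print(f"{core}: {khz / 1_000_000:.2f} GHz")
```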

Understanding CPU architecture


The term “CPU architecture” is easy to overthink. The simplest way to understand CPU architecture is to think of it like a blueprint — the overall design each CPU uses. Just like with buildings, not all CPUs are designed the same. They’re all CPUs, but the way they function can vary dramatically. Think about the difference between a car, boat, and motorcycle — all vehicles, but very different in nature. CPUs are kind of like that. 

The CPU architecture is designed mainly around the components we’ve discussed so far — cores, clock speed, instruction sets, memory, etc. There are various CPU architectures, but the most common are x86 and ARM.

x86 vs ARM

In the hierarchy of modern processors, x86 and ARM stand above the rest. These are the two most common architectures found in modern computers and mobile devices, respectively. There is some nuance there, which we’ll discuss in just a bit.
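If you’re ever unsure which architecture a given device reports, a one-liner will tell you:

```python
# Report the machine's CPU architecture as the OS sees it.
import platform

print(platform.machine())
# Typical values: "x86_64"/"AMD64" on x86 systems,
# "arm64"/"aarch64" on 64-bit ARM (phones, tablets, Apple silicon).
```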

What is x86?

Intel developed the x86 architecture in 1978 with the original 16-bit 8086 processor. In 1985, it evolved into 32-bit processing, which lasted nearly 20 years until 64-bit x86 processors arrived in 2003. You’ll find x86 processors commonly used in desktop and laptop computers, as they’re designed to be powerful, throughput-heavy chips. The tradeoff is heat output and power draw, which makes them a poor fit for battery-powered devices like smartphones and tablets.

x86 processors use the CISC (Complex Instruction Set Computer) design philosophy, which uses a large set of complex instructions to handle multiple operations within a single instruction. These chips can handle tasks that require significant processing power from a single thread, making them ideal for complex, power-hungry applications. Most modern x86 processors also support multithreading, further increasing performance. 

What is ARM?

On the other side of that coin, you have ARM chips. Development of ARM began at Acorn Computers in 1983 as an early RISC (Reduced Instruction Set Computer) design. In fact, ARM stands for Advanced RISC Machines, though the acronym originally meant “Acorn RISC Machines” and changed to “Advanced” in 1990, when the design was spun out into its own company.

The RISC design philosophy is the opposite of CISC, emphasizing smaller, simpler instructions, each performing a single operation. Since instructions are smaller, they are more quickly executed, enhancing energy efficiency. Because of this, the ARM architecture is perfect for mobile chipsets for smartphones, tablets, IoT devices, and more. The big.LITTLE architecture mentioned earlier is most commonly used in ARM chips, as most don’t offer multithreading. This way, the big cores handle high-priority threads, and lower-priority or less intense threads go to the little cores. 

CPU vs. GPU: What’s the difference? 

Where CPU stands for Central Processing Unit, GPU similarly stands for Graphics Processing Unit. While the two may sound similar, they handle very different roles within modern computers, smartphones, tablets, etc. You can think of the CPU as a more generalized piece of hardware and the GPU as a more specialized component within the hardware hierarchy. 

Since we’ve covered what the CPU does pretty thoroughly, let’s focus on the GPU here. If you’re a gamer of any kind, you’re likely already familiar with GPUs, also called graphics cards (the terms are used essentially interchangeably, though the card is technically the board the GPU sits on), as their primary purpose is to handle graphics duties. The GPU’s design is highly focused on graphics rendering for both video and image processing.

One of the main differences between CPUs and GPUs is how they handle data processing. CPUs are optimized for fast single-thread performance, whereas GPUs are optimized to handle thousands of small tasks simultaneously. That makes GPUs ideal for work like image and video processing, scientific simulations, and machine learning (ML). In fact, GPUs handle most ML workloads, an area NVIDIA has invested in heavily in recent years.
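You can get a feel for the difference with a small experiment. NumPy still runs on the CPU, but its “apply one operation to a huge batch at once” style mirrors how GPUs approach work, versus a loop that touches one value at a time:

```python
# One-at-a-time (loop) vs. batched (GPU-style) processing.
# Requires the third-party numpy package (pip install numpy).
import time
import numpy as np

pixels = np.random.rand(5_000_000)  # pretend these are pixel values

start = time.perf_counter()
dimmed = [p * 0.5 for p in pixels]  # one value per step
print(f"loop:    {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
dimmed = pixels * 0.5               # one batched operation
print(f"batched: {time.perf_counter() - start:.3f}s")
```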

But here’s the catch: many CPUs have built-in GPUs. These are called “onboard” or “integrated” GPUs. They’re often good enough for everyday graphics tasks but aren’t up to demanding workloads like video editing or gaming. For that, you need a dedicated, or “discrete,” GPU.

Whether you actually need a discrete GPU comes down entirely to your use case. In most business cases, especially for dedicated, single-use, or company-owned devices, the odds are low that you’ll need one. The integrated GPU in many modern processors is more than enough for these scenarios.

How to choose the right CPU for your application

So, that’s a lot. We get it! And while you shouldn’t have to remember all of it, context is important when choosing a suitable processor for your use case. Ultimately, it comes down to one primary decision that everything else branches from: x86 or ARM.

To approach that decision, you need to answer a couple of questions. First, does the device need to be portable, with the option to run on battery? Second, how heavy are the applications you need to run? The first question is straightforward. The second, however, is a little more nebulous. Let’s break it down.

When we talk about how “heavy” an application is, we’re talking about what it needs to run. To put that into perspective, a text editor is incredibly lightweight; it requires minimal resources to run efficiently. Conversely, video editing software is highly complex and, in most cases, requires significant power to run well. While it’s unlikely that your devices will fall into either of those scenarios exactly, those two extremes should offer insight into how to approach the question.
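One practical way to judge how heavy your actual workload is: run it on a candidate device and watch utilization. Here’s a sketch using the third-party psutil library (pip install psutil):

```python
# Sample overall CPU and memory utilization while your workload runs.
import psutil

for _ in range(10):
    cpu = psutil.cpu_percent(interval=1)   # CPU usage over the last second
    mem = psutil.virtual_memory().percent  # share of RAM in use
    print(f"CPU: {cpu:5.1f}%  RAM: {mem:5.1f}%")
```

If the CPU sits near 100% for long stretches, the workload is heavy for that hardware; if it idles in the single digits, a lighter (and cheaper) chip may do.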

Sometimes, the form factor will choose for you. If you know you want to run smartphones or tablets, you’re almost certainly looking at ARM processors. If you need a super-powered kiosk or powerful point-of-sale system that requires a full desktop computer, then x86 is probably the better option. 

There are, of course, outliers to this philosophy. For example, most modern Apple hardware (especially iPads and Macs) runs on Apple’s own M-Series silicon. These ARM-based chips essentially bridge the gap between x86 and ARM chips in terms of performance — they’re some of the most powerful processors in consumer electronics today. 

Need help deciding whether x86 or ARM is right for your application?

Check out our in-depth resource that compares these chip architectures for dedicated device use cases.

How to benchmark a CPU (and how to get it right)

If you want to gauge CPU performance, especially relative to other CPUs, benchmarking tools are the easiest way to quantify it. Benchmarking evaluates and measures the performance of a given piece of hardware (not just CPUs, but also GPUs) using a series of standardized workload tests. These generally measure performance across a range of metrics, including single-core and multi-core performance, and offer comparisons to similar processors or GPUs.

While this can be a helpful tool, it’s worth noting that plenty of companies, both device manufacturers and chip makers, have been caught “cheating” CPU benchmarks to suggest better results than are repeatable in the real world. So, while you should take benchmarking results with a grain of salt, they still have their place when measuring performance and comparing results. 

If you’re interested in giving it a whirl, here are some suggestions for how to get the most out of your benchmarking tools. 

  • Standardize your testing environment: Do your best to keep all the controllable variables, well, controlled. Try to compare CPUs across the same operating system and version (ideally, the one you’ll be using daily — Windows and Android benchmark very differently, for example). Keep the software configurations as similar as possible, including any applications running in the background, uptime, etc. The physical environment in which you test should also be consistent, so having a test lab is ideal. 
  • Stay with the same tool: Using the same benchmarking software across devices will give you a level playing field for testing. While testing the same processor across multiple operating systems isn’t ideal for comparing CPU performance, it is beneficial when comparing how a given piece of hardware will perform across different OSes. For example, if you’re considering flipping a Windows x86 system to Android, using the same benchmarking tool on both will give you a good idea of which OS will perform better on the same hardware. 
  • Monitor thermal performance: Many CPUs throttle performance when the temps rise, which can artificially hinder performance — monitoring the temperature of your CPU when benchmarking can help you pinpoint this. 
  • Test multiple times and keep thorough documentation: A single benchmark run is a good starting point, but performance isn’t perfectly consistent from run to run. Running several tests, then documenting and averaging the results, will give you a more accurate picture of real performance (see the sketch below this list).
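To show what “test multiple times and average” looks like in practice, here’s a minimal sketch using Python’s built-in timeit module on a toy CPU-bound task. Dedicated tools like Geekbench do far more than this; the point is the methodology.

```python
# Run a toy CPU-bound workload several times, then average the results.
import statistics
import timeit

def workload():
    return sum(i * i for i in range(100_000))  # arbitrary busywork

runs = [timeit.timeit(workload, number=50) for _ in range(5)]
print(f"mean: {statistics.mean(runs):.3f}s  stdev: {statistics.stdev(runs):.3f}s")
```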

Recommended CPU benchmarking tools

There are a variety of benchmarking tools available today, and they’re not all created equal. Here is a short list of software to consider, but it’s by no means exhaustive. 

  • Geekbench: Geekbench is a standard benchmarking tool that measures both CPU and GPU performance and is available across Windows, Android, and iOS for quick comparisons across operating systems. It tests single-core and multi-core performance, memory, and more. 
  • AnTuTu Benchmark: AnTuTu is the go-to benchmarking software for many mobile devices, including Android and iOS. It tests CPU, GPU, memory, and UX.  
  • PCMark: PCMark is a powerful benchmarking tool “for the modern office.” The full suite of tests is designed for Windows, but an Android version is available that performs various performance and battery life tests. 3DMark is a GPU benchmarking tool that’s part of the PCMark family and is available for Windows, Android, and iOS. 

We prefer benchmarking with Geekbench since it’s so ubiquitous. It covers the most common tests across devices, and its wide availability across operating systems makes it an excellent choice for testing a variety of use cases. 

CPUs are complicated, but you don’t have to go it alone

As you probably already realize, there’s more to picking the right CPU than just looking at a couple of numbers and saying, “Bigger is better, so let’s get this one.” This is often the starting point for many device journeys, so it can be overwhelming. But that’s why we’re here. Whether you’re trying to set up a new device program or expand an existing one within your organization, we’re here to help. Our hardware experts can help you pinpoint exactly what you need for your given use case — from the CPU and beyond. 

Cam Summerson

Cam is Esper’s Director of Content and brings over 10 years of technology journalism experience to Esper, including nearly half a decade as Editor in Chief of a technology publication. He oversees the ideation, execution, and distribution of content ranging from blog posts to ebooks and beyond.
