What really are the differences between ARM and x86 processors? Microprocessor architecture has traditionally been confined to highly technical discussions among engineers and developers, though broader awareness of the concept has risen in recent years thanks to the growing popularity of ARM (and the relative stagnation of x86). But understanding how the two compare requires a basic grounding in a few computing concepts.
ARM and x86 don't technically refer to processors themselves, but rather to the instruction sets computer processors use. An instruction set is best thought of as a list of commands (instructions) a microprocessor can execute, and such a list constitutes an Instruction Set Architecture, or ISA. ARM and x86 are both families of ISAs, and the two most popular ISAs in use today. There are fundamental differences between the ARM and x86 ISAs that make them "incompatible": code written for x86 processors will (usually) not run natively on ARM processors. This incompatibility means that each ISA has evolved its own ecosystem, not just of microprocessors, but of operating systems, peripheral manufacturers, devices, and end user software.
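To make the "list of commands" idea concrete, here is a minimal sketch of how the same source code becomes different machine instructions under each ISA. The C function is real; the assembly in the comments is simplified, representative compiler output (something like gcc -O2), and the exact instructions will vary by compiler and settings.

```c
/* The same C function, compiled for two different ISAs. */
int add(int a, int b) {
    return a + b;
}

/* Representative x86-64 output:      Representative AArch64 (ARM) output:
 *   lea  eax, [rdi + rsi]              add  w0, w0, w1
 *   ret                                ret
 *
 * Not only are the instruction names different; their binary encodings are
 * entirely different, so a compiled x86 binary is meaningless to an ARM
 * processor, and vice versa. */
```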
In practice, many companies refer to ARM and x86 processors — that is, hardware, not an instruction set — as shorthand for processors that are designed for those instruction sets. This makes things a bit simpler, as a given processor is designed for only one instruction set. While we may use the terms interchangeably to refer to both the instruction sets (ISAs) and processors in this article, it’s still a distinction worth understanding.
ARM vs x86 History
ARM and x86 trace their roots to different eras: x86 emerged in the late 1970s with Intel's 8086 processor, while ARM arrived in the mid-1980s, and their paths to popularity were quite different. x86 was an invention of the Intel Corporation, and its microprocessors rapidly gained popularity in IBM PCs in the early 1980s. Combined with the rise of Microsoft's DOS operating system, x86 processors dominated the corporate and personal computing market by the early 1990s, achieving a near-total monopoly by the 2000s. Today, most desktop and laptop computers use x86 processors (excluding those built by Apple, which now use ARM), along with many other form factors like servers, industrial computers, edge devices, embedded systems, supercomputers, scientific equipment, and more.
ARM grew out of the RISC (reduced instruction set computer) design philosophy and was first commercialized by Acorn Computers in the UK with the ARM1 processor in 1985. ARM's real growth, though, would take decades to unfold. (Fun fact: Apple was an early adopter of ARM; the original Apple Newton MessagePad used an ARM processor in 1993.) ARM processors found popularity around the turn of the millennium with the rise of mobile phones, and the transformative moment of Apple's iPhone (powered by an ARM-based chipset), followed by Google's Android platform, catapulted ARM processors from relative obscurity into mainstream computing relevance. Today, virtually all smartphones ship with ARM-based processors. ARM processors are also increasingly common in laptops (all current Macs use Apple's ARM-based processors), and they're largely dominant in the IoT ecosystem (wearables, smart home devices, sensors, security, medical devices, kiosks, and more). There's even an emerging ARM server market.
Which is Better, ARM or x86?
Both ARM and x86 have various advantages and disadvantages, the technical underpinnings of which are too complex to explain fully in this article. But there are some top-level strengths and weaknesses you should know: each architecture is particularly well suited to certain use cases and solutions.
x86 Advantages
Legacy software, virtualization, and specialty systems: Because x86 has been the dominant computing architecture for the last 40 or so years, it established a massive ecosystem of software and hardware support. Only in recent years has the rise of ARM on mobile really challenged this dominance. As a result, x86 still enjoys a massive advantage in legacy software support. For organizations that depend on highly specialized software, or that have extensive needs around virtualization, legacy systems support, or strict vendor sourcing policies, x86 is still the only real option.
Integrators and vendors: When it comes to the laptop, desktop, and server computing markets, x86 affords access to a much larger ecosystem of builders and systems integrators. Much of this is owed to x86's long-dominant position in these markets. If you need a device or system designed to meet an exact cost or performance specification, x86 generally gives you more ways to get that result, provided it's within the constraints of typical x86 form factors. (For example, you won't find x86 smartphones or wearables, and ultra-low-power IoT applications are generally a poor fit for x86.)
OS compatibility: x86 processors can run Windows, of course, but they’re also compatible with effectively all distributions of Linux and Linux-like operating systems, as well as specialty OSes like real-time operating systems. For applications where building your own solution from the ground up is crucial — for example, a dedicated server — x86 gives you the broadest range of platforms to choose from.
Peripheral compatibility: For uses requiring peripherals like PCI cards, external monitors, printers, industrial or scientific sensors, networking devices, and other specialty external hardware, x86 is often the safest choice for compatibility. This isn't necessarily because x86 itself supports more peripherals as a technical matter, but because the operating systems and hardware platforms typically used in conjunction with x86 (e.g., Windows) do.
ARM Advantages
Efficiency (performance per watt): Computer processors are generally measured not just in terms of their overall computational ability, but in how much performance they deliver for each watt of power they consume (a toy calculation after this list illustrates the metric). ARM processors are massively more efficient on this measure thanks to their streamlined instruction set and the highly integrated approach to ARM chipset design (by companies like Qualcomm, MediaTek, and Rockchip). As a result, in applications where power consumption is a make-or-break consideration, especially mobile applications, ARM often doesn't just beat x86; it's in another class entirely.
Mobility: Modern ARM processors are designed from the ground up for mobility use cases. Their low power consumption, highly integrated system-on-a-chip (SoC) design, and small physical footprint (almost all ARM processors are passively cooled) mean they can fit in almost any physical design envelope. ARM processors range from "very small" to "absolutely tiny," powering ultra-compact smartwatches and IoT sensors, form factors that simply aren't practical for x86 given its power consumption and physical footprint. They're also designed to be wirelessly enabled, with most chips supporting a large array of connectivity options.
Hardware integration: Many ARM chipsets come out of the box with a huge suite of connectivity and sensing capabilities. Bluetooth, Wi-Fi, GPS, cellular, environmental sensing (pressure, temperature, gyroscope), and integrated controllers for functions like display, power management, and RF are often packaged on a single board the size of a few postage stamps. While not an advantage of the ARM instruction set per se, this is a practical result of ARM's mobility-first use case: smartphones require a huge array of integrated communications, sensors, and controllers in a highly constrained space, and modern ARM chipsets have coalesced around this use case. This means the "out of the box" functionality of most ARM chips is incredibly high, often eliminating the need for systems integrators and giving device manufacturers immense flexibility in end-product capability and packaging.
Cost: While super-premium ARM smartphone chipsets from brands like Qualcomm are by no means "cheap," ARM chipsets start at far lower price points than similarly capable x86 systems. Economies of scale and a small physical footprint have favored ARM economics, as have extremely price-competitive use cases like smartphones, IoT, and wearables. Even excluding Apple's proprietary A- and M-series processors, the number of ARM chips manufactured annually far outstrips that of x86. As a result, the financial barrier to entry with ARM chips is very low, and will likely remain so for the foreseeable future.
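To illustrate the performance-per-watt metric mentioned above, here is a toy calculation. The scores and power figures are entirely hypothetical, chosen only to show the arithmetic; they are not benchmarks of any real chip.

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical benchmark scores (higher is better) and sustained power
     * draw. These numbers are invented purely for illustration. */
    double arm_score = 10000.0, arm_watts = 5.0;   /* e.g., a fanless mobile SoC */
    double x86_score = 20000.0, x86_watts = 45.0;  /* e.g., a desktop CPU        */

    /* Performance per watt = score / power consumed. */
    printf("ARM: %.0f points per watt\n", arm_score / arm_watts);  /* 2000 */
    printf("x86: %.0f points per watt\n", x86_score / x86_watts);  /* ~444 */

    /* The hypothetical x86 chip is faster in absolute terms, but the ARM
     * chip delivers several times more performance per watt consumed. */
    return 0;
}
```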
Why Aren’t ARM and x86 Compatible?
This is a tricky question to answer in a very technical sense, but if you think of ARM and x86 as those “lists” of executable commands we discussed earlier, you’ll understand on a basic level why software written for x86 doesn’t run on ARM, and vice versa.
ARM follows the RISC (reduced instruction set computer) design philosophy, which deliberately keeps the set of instructions a processor must implement small and simple, for the sake of maximizing efficiency and performance. x86, by contrast, is a CISC (complex instruction set computer) architecture, with a larger set of more elaborate instructions. But ARM is not merely a stripped-down subset of x86: the two ISAs define different instructions with entirely different binary encodings, the product of separate evolutionary paths stretching back decades. Code written for x86 processors therefore contains instructions that ARM processors cannot decode or execute, and code written for ARM processors is likewise incompatible with x86 processors.
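One concrete way the design philosophies differ: x86, as a CISC ISA, allows a single instruction to operate directly on memory, while ARM's RISC design separates memory access from arithmetic. Below is a simplified sketch; the assembly in the comments is illustrative rather than exact compiler output.

```c
/* Incrementing a counter in memory, viewed through the CISC/RISC split. */
void increment(int *counter) {
    *counter += 1;
}

/* x86-64 (CISC): a single instruction      AArch64 (RISC): memory access and
 * can read, modify, and write memory:      arithmetic are separate steps:
 *
 *   add dword ptr [rdi], 1                   ldr w1, [x0]    // load from memory
 *                                            add w1, w1, 1   // modify in register
 *                                            str w1, [x0]    // store back
 */
```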
Today, x86 and ARM remain incompatible. However, some software and operating system vendors have developed codebases for both architectures; Microsoft, for example, distributes Windows for both ARM and x86 systems. (Notably, x86 Windows programs do not run natively on Windows on ARM; they can only run through a kind of "emulation," a subject too large for this article.) While the perceived gap between these two architectures is slowly diminishing from the perspective of the end user, they remain fundamentally different and technically incompatible.
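As a closing aside on how vendors ship one codebase for both architectures: most of the heavy lifting is done by compilers, and portable source code can branch on predefined architecture macros in the rare places where the difference matters. Here is a minimal sketch; the macros shown (__x86_64__ and __aarch64__ for GCC/Clang, _M_X64 and _M_ARM64 for MSVC) are standard predefined compiler macros.

```c
#include <stdio.h>

int main(void) {
    /* Compile-time architecture detection via predefined compiler macros. */
#if defined(__x86_64__) || defined(_M_X64)
    printf("Compiled for x86-64\n");
#elif defined(__aarch64__) || defined(_M_ARM64)
    printf("Compiled for 64-bit ARM (AArch64)\n");
#else
    printf("Compiled for another architecture\n");
#endif
    return 0;
}
```

The same source file compiles cleanly for either target; what differs is the machine code the compiler emits, which is exactly why the binaries themselves cannot cross over.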