Central Processing Unit (CPU) technology has been evolving since the arrival of the integrated circuit in the early 1960s. Many of us were first introduced to CPU technology with single-core devices. The first dual-core designs didn’t reach the market until about 2005, with the Intel Pentium D. From there, the race was on, with quad-core, octa-core, and other densities following in rapid succession. Today’s performance CPUs routinely carry 12, 16, or even 24 processor cores.
The quest for an ever-increasing number of cores centers on improving aggregate processing, or the ability to execute more instructions in parallel. Adding more cores will improve a system’s processing performance, but doing so also increases cost and power consumption. What if there were a better way to optimize processing capability, performance, and efficiency without necessarily needing to add more cores? Intel, long an industry leader in CPU development and innovation, believes it has the answer. Starting with its 12th generation devices, Intel has been segmenting processor cores into two classes: performance cores (P-cores) and efficiency cores (E-cores).
- P-cores resemble the cores found in traditional multicore architectures, where every core is essentially identical in terms of clock speed, power consumption, and processing capacity. They are performance oriented, intended to take care of larger processing needs.
- E-cores, on the other hand, are smaller and more power efficient, designed to manage background and lighter processing duties. These cores run all the time but don’t require powerful processing capability to do their jobs. (A short sketch after this list shows how software can tell the two core classes apart.)
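To make the distinction concrete, here is a minimal sketch, assuming an x86-64 system and a GCC- or Clang-compatible compiler, that uses the CPUID instruction to check Intel’s documented hybrid flag (leaf 7, EDX bit 15) and core-type field (leaf 0x1A) to report which class of core the calling thread happens to be running on:

```c
/* Minimal sketch (x86-64, GCC or Clang): ask CPUID whether this is a
 * hybrid CPU and, if so, which class of core this thread is currently
 * running on. CPUID describes only the core that executes it. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 7, sub-leaf 0: EDX bit 15 is the hybrid flag. */
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) || !(edx & (1u << 15))) {
        puts("Not a hybrid CPU: all cores are the same class.");
        return 0;
    }

    /* Leaf 0x1A: EAX bits 31:24 report the current core's type
     * (0x40 = Core / P-core, 0x20 = Atom / E-core). */
    __get_cpuid_count(0x1A, 0, &eax, &ebx, &ecx, &edx);
    unsigned int core_type = eax >> 24;

    if (core_type == 0x40)
        puts("This thread is running on a P-core.");
    else if (core_type == 0x20)
        puts("This thread is running on an E-core.");
    else
        printf("Unrecognized core type: 0x%x\n", core_type);

    return 0;
}
```

Because CPUID describes only the core it executes on, mapping an entire package means pinning the thread to each logical processor in turn and repeating the query.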
Performance Hybrid Architecture
P-cores and E-cores fit into a unique performance hybrid architecture that integrates the two types of cores on a single die. Put simply, the hybrid architecture is a multicore technology designed to manage multiple processing workloads efficiently, whether they are single threaded, partially threaded, or multithreaded. When P-cores are handling processing-intensive workloads (think gaming, machine learning, or machine vision), the system prevents background tasks from interrupting or occupying those cores, resulting in a more robust and consistent processing experience. Conversely, lighter-duty background or everyday computing tasks are assigned to the E-cores, saving power without sacrificing user experience and leaving the P-cores at the ready for bigger tasks. This division of labor achieves maximum performance across a wide range of processing scenarios.
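Applications can cooperate with this division of labor through quality-of-service hints to the operating system. The sketch below, assuming a recent Windows 10 or Windows 11 SDK, opts a background worker thread into power throttling (often called EcoQoS); on hybrid Intel CPUs this encourages the scheduler to keep that thread on E-cores and at lower clocks, leaving the P-cores free for heavier work:

```c
/* Minimal sketch (recent Windows 10 or Windows 11 SDK): opt a worker
 * thread into power throttling ("EcoQoS"). On hybrid Intel CPUs this
 * tells the scheduler the thread favors efficiency over speed, so it
 * tends to land on E-cores, leaving P-cores free for heavier work. */
#include <windows.h>
#include <stdio.h>

static void mark_thread_as_background(HANDLE thread)
{
    THREAD_POWER_THROTTLING_STATE state = {0};
    state.Version     = THREAD_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    state.StateMask   = THREAD_POWER_THROTTLING_EXECUTION_SPEED; /* enable throttling */

    if (!SetThreadInformation(thread, ThreadPowerThrottling, &state, sizeof(state)))
        printf("SetThreadInformation failed: %lu\n", GetLastError());
}

static DWORD WINAPI background_work(LPVOID arg)
{
    (void)arg;
    mark_thread_as_background(GetCurrentThread());
    /* ... housekeeping, logging, telemetry, and similar light tasks ... */
    return 0;
}

int main(void)
{
    HANDLE worker = CreateThread(NULL, 0, background_work, NULL, 0, NULL);
    if (worker != NULL) {
        WaitForSingleObject(worker, INFINITE);
        CloseHandle(worker);
    }
    return 0;
}
```

The hint is advisory: the scheduler can still move the thread if conditions change, which is exactly the behavior the hybrid architecture is designed around.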
The magic here comes together with the help of Intel Thread Director, a controller of sorts built directly into the hardware that is responsible for ensuring the P-cores and E-cores work together seamlessly. Rather than relying on static rules, it uses machine learning to inform scheduling decisions, helping the operating system assign each task to the best core, whether performance or efficiency, for optimal workload distribution and balance across the CPU.
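Because Thread Director and the OS scheduler make the placement decisions, application code rarely needs to pin threads to specific cores; it mostly just needs to know what the scheduler can see. As a rough illustration, the sketch below, assuming Windows 10 or later, enumerates the system CPU sets and prints each logical processor’s efficiency class, the topology information that, combined with Thread Director feedback, drives core selection (on hybrid parts, P-cores report a higher class than E-cores):

```c
/* Minimal sketch (Windows 10 or later): enumerate the system CPU sets
 * and print each logical processor's efficiency class. The scheduler
 * uses this topology, plus Thread Director feedback on hybrid parts,
 * when deciding where to place threads. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    ULONG length = 0;

    /* First call reports the buffer size required. */
    GetSystemCpuSetInformation(NULL, 0, &length, GetCurrentProcess(), 0);

    SYSTEM_CPU_SET_INFORMATION *info = malloc(length);
    if (!info || !GetSystemCpuSetInformation(info, length, &length,
                                             GetCurrentProcess(), 0)) {
        printf("GetSystemCpuSetInformation failed: %lu\n", GetLastError());
        free(info);
        return 1;
    }

    /* Walk the variable-length array of CPU set records. */
    BYTE *cursor = (BYTE *)info;
    BYTE *end = cursor + length;
    while (cursor < end) {
        SYSTEM_CPU_SET_INFORMATION *entry = (SYSTEM_CPU_SET_INFORMATION *)cursor;
        if (entry->Type == CpuSetInformation) {
            printf("Logical processor %u: efficiency class %u\n",
                   (unsigned)entry->CpuSet.LogicalProcessorIndex,
                   (unsigned)entry->CpuSet.EfficiencyClass);
        }
        cursor += entry->Size;
    }

    free(info);
    return 0;
}
```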
Why Have Two Types of Cores?
We’ve touched on the benefit of using two types of cores in one CPU for dividing processing tasks, but beyond this, what’s so important about this architecture? In modern multicore CPUs where all cores are the same (essentially, every core is a P-core), each core offers the same performance characteristics and consumes the same amount of power. Even when the CPU is idle, the power draw can be significant. In an industrial scenario using workstations and servers, where everything is plugged into a constant power source, this is less of an issue, aside from perhaps having to manage the heat generated by the system.
When we shift to smaller embedded systems, battery-powered mobile systems, or industrial PCs with limited ability to dissipate heat, reducing CPU power consumption is a game changer: this is where E-cores show their true value. Most CPUs spend most of their time in a state of low processing demand. For these devices, running the operating system and background tasks on smaller, more power-efficient cores helps solve thermal management and battery life challenges. Mobile applications, particularly those in robotics and autonomous vehicles, stand to benefit greatly from this advancement.
Hybrid Architectures Here to Stay
Smartphones, tablets, and other personal electronic devices have utilized CPUs with different classes of cores for several years, and this technology is largely what enables those devices to maximize battery life. Intel has advanced that concept by bringing the performance hybrid architecture to x86 desktop CPUs, opening new opportunities for high-performance processing within existing and new applications, and broadening CPU selection and computing power options.
CoastIPC offers a wide range of industrial computing solutions from major manufacturers such as Neousys and Advantech as well as custom and value-added solutions and services. We carry several products supporting Intel processors, and the professionals at CoastIPC can work with you to understand your unique application needs and can propose a computing solution that meets your requirements and budget.
For more information about CPU types and the benefits and advantages of each as they pertain to your application, please contact our product experts by emailing [email protected], submitting a form here, or calling 866-412-6278.