There is a saying that the ideal is plump but reality is skinny, and no phrase better describes the current situation of Intel's large and small core architecture.
The ideal is plump: in Intel's official test environment, CPUs based on the large and small core architecture reportedly reached the expected and envisioned performance levels.
The reality is skinny: once removed from that official test environment, the overall performance of these CPUs is clearly unsatisfactory.
Does that mean Intel is unaware of this situation?
Intel must know. Creating an environment completely detached from the official test setup is easy; even ordinary consumers can do it, and in fact the vast majority of consumers face exactly this non-ideal operating environment every day. So why did Intel still bite the bullet and bring large-and-small-core CPUs to market?
Because Intel's starting point was different.
First, Intel adopted the large and small core architecture mainly to confront and suppress AMD's Ryzen 5000 series; as long as the new products regained dominance of the market, everything else was a secondary concern.
Second, with single-core power consumption and heat output both surging, the all-big-core development path had temporarily hit a wall. Continuing down that path would have required a more refined process node, but at the time of the 11th-generation Core, Intel was caught in a bind of tight schedules and heavy workloads, and it could not realize that idea in the short term.
Finally, from Intel's point of view, the fact that the large and small core architecture is "not easy to use" cannot be blamed on Intel. The architecture must be usable: at least in Intel's own test environment, CPUs based on it hit their preset performance targets. If consumers say it is not easy to use, that is because the PC operating environment they face is unfriendly, and many technologies need a more mature environment to perform at their best. In other words, Intel simply shipped, ahead of time, processors meant to "fight the future."
I need to say a few more words about the operating environment.
Modern CPUs have more and more cores, so why haven't many applications on our computers gotten much faster?
Dual core, quad core, six core, eight core, ten core, twelve core, sixteen core...
Core counts are piling up fast, so by rights application speed should improve in step. But it turns out that is not the case: applications are still slow where they were slow before, and this year's computer runs the same software at nearly the same speed as last year's. So is our blind pursuit of core count misguided?
To figure this out, you need to understand that whether a piece of software runs significantly faster does not depend on the hardware company alone; the software company matters just as much. Only when hardware and software companies both push in the direction of multi-core programming can a more ideal operating environment be created for the software.
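The ceiling described above is often summarized by Amdahl's law: if only part of a program can run in parallel, extra cores stop helping very quickly. A minimal Python sketch, where the 50% parallel fraction is purely illustrative:

```python
# Amdahl's law: speedup on N cores is limited by the program's serial
# fraction. The 0.5 parallel fraction below is an illustrative number.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A program that is only 50% parallelizable barely benefits beyond a
# few cores, no matter how many P-cores and E-cores the CPU stacks up.
for n in (2, 4, 8, 16):
    print(f"{n:2d} cores -> {amdahl_speedup(0.5, n):.2f}x speedup")
```

Going from 2 to 16 cores here only moves the speedup from about 1.33x to about 1.88x, which matches the everyday experience the paragraph above describes.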
When you see so many PC products with six or more cores, don't forget that among all PC users, those whose processors have fewer than six cores are actually the largest group; a great many users are still on dual-core or quad-core CPUs.
The first thing a software company considers is the coverage of its own product. With so many sub-six-core users, it will optimize and program for six cores at most and pay little attention to users with more. This is why even the large and small core architecture needs to retain some of the characteristics of an all-six-core or all-eight-core design.
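As an illustration of that mindset, here is a minimal Python sketch of how an application written for the "six cores or fewer" mainstream might size its worker pool. The cap of six comes from the paragraph above; it is not taken from any real product's code:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def square(x: int) -> int:
    return x * x

# Cap the pool at six workers even on a 10- or 16-core machine;
# code written this way never touches the extra P- or E-cores.
workers = min(os.cpu_count() or 1, 6)

with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(square, range(8)))

print(workers, results)
```

On a 12600KF, a pool sized like this would leave at least four of the ten cores permanently idle, which is exactly the coverage trade-off the text describes.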
As for Intel, "the large and small core architecture is not easy to use" is not something it regards as a problem at all. "Not easy to use" is only relative and temporary; as long as consumers choose to trust Intel and stop fiddling, even a Core i9 based on the large and small core architecture is perfectly usable. So many cores, so many threads, so many frames: how impressive!
Many users of large-and-small-core CPUs have probably noticed this phenomenon: the small cores are not as good as Intel claims. More precisely, the small cores contribute so sloppily to overall CPU performance that one could almost give them a negative score.
This strange phenomenon has spawned plenty of memes mocking Intel's large and small core architecture, the most widely circulated being: "when one core is in trouble, some cores cheer it on while the rest just stand by and watch."
Take the 12th-generation Intel Core i5-12600KF as an example.
The Core i5-12600KF is a typical large-and-small-core CPU: of its 10 cores, 6 are big cores (P-cores) and the remaining 4 are small cores (E-cores).
Note, however, that the performance parameter table above is quite incomplete and may mislead many novice buyers.
First, 3.6GHz~4.9GHz is the turbo range of the big cores; the turbo range of the small cores is 2.8GHz~3.7GHz.
Second, the 4.9GHz maximum turbo frequency does not apply to all 6 big cores. In a large-and-small-core design, that peak value is reached by only 1 or 2 big cores; the rest do not get as high as 4.9GHz.
Third, the big cores take up a smaller share than the small cores. That may not seem important, but it illustrates one thing: Intel's process technology really needs an update.
Based on these three points, we can roughly make out the essence of Intel's large and small core design.
In essence, the large and small core architecture is a reworking of the all-big-core architecture. On the surface the core count has grown, but the CPU's real performance still rests on the big cores, and the small cores are largely dead weight.
We can also look at it this way: compared with an all-big-core design, it is as if the performance of one big core were shaved off, with the shortfall spread across one or two big cores and all of the small cores.
Therefore, compared with the 10th- and 11th-generation Core processors, the 12th generation's performance gains still come from single-core (P-core) performance; the small cores (E-cores) provide only a modest increment, and for many applications their enthusiasm is simply never mobilized. As noted earlier, most applications at this stage are still programmed and optimized for six cores or fewer. Small cores? Who cares?
Beyond realizing that the small cores are useless, many game enthusiasts are also keen on the trick of shielding (disabling) them: there are too many small cores to put to any use, yet they still draw power and generate heat, so it is better to turn them off outright.
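On Linux, that shielding trick can be done per process rather than in the BIOS, by restricting a process's CPU affinity. A hedged sketch: the idea that logical CPUs 0-11 map to the six hyper-threaded P-cores is an assumption for illustration only, and must be checked against your own machine's topology (e.g. with `lscpu --extended`) before use:

```python
import os

# ASSUMPTION: on this hypothetical 12600KF layout, logical CPUs 0-11
# are the six hyper-threaded P-cores and the E-cores come after them.
# Verify the real numbering on your own machine before relying on it.
P_CORE_CPUS = set(range(12))

def pin_to_p_cores(pid: int = 0) -> set:
    """Restrict a process (0 = the current one) to the assumed P-core
    set, falling back to whatever CPUs are actually available."""
    available = os.sched_getaffinity(pid)
    target = (P_CORE_CPUS & available) or available
    os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)

print(sorted(pin_to_p_cores()))
```

The same effect is available from the shell with `taskset -c 0-11 ./game` (again, only under that core-numbering assumption); on Windows, users typically reach for Task Manager's affinity dialog instead.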