It's got to be Intel! Tech companies are all in on AI, but these innovations are truly ahead of their time

Mondo Technology Updated on 2024-01-29

IT House

Author | Xiyuan

Since the beginning of this year, the popularity of ChatGPT has set off a new global wave of generative AI. It is becoming a new driving force in the transformation of countless industries, and it is the future that major technology companies are betting on.

For example, AMD recently launched the AMD Instinct MI300X GPU, a data-center AI chip, as well as the MI300A accelerated processing unit (APU), which combines the latest AMD CDNA 3 architecture with "Zen 4" CPU cores, drawing widespread attention.

On the road to "AI changing the world", one company actually laid out its plans early: Intel. Back in 2018, Intel proposed bringing AI to the PC and launched the "AI on PC Developer Program" for AI PC developers. Since then, Intel has kept integrating AI capabilities into its Core processors. Starting with the 10th Gen Core-X, Intel added AI- and deep-learning-related acceleration instructions to its CPUs, improved AI performance at the architecture level, built the Intel GNA into the SoC to accelerate low-power AI applications on PCs, and introduced AI acceleration units into its Xe- and Arc-architecture GPUs.

Intel's years of exploration are about to see a concentrated payoff. On December 15, Intel will officially launch the Core Ultra processors, based on the new Meteor Lake architecture, in China. Intel's most important move in Meteor Lake is bringing AI to the client PC: the processor integrates a dedicated NPU that delivers standalone, low-power AI acceleration.

Specifically, the NPU integrated into Meteor Lake enables more efficient AI computing. It contains two neural compute engines to better support workloads including generative AI, computer vision, image enhancement, and collaborative AI. Beyond the NPU, the CPU and GPU can also perform AI computing, and the different AI units are scheduled and coordinated across different scenarios, so that overall energy efficiency can be up to 8x better than the previous generation.

And now that generative AI has laid the foundation for the AI 2.0 era, Intel has also put a lot of effort into making AIGC run better locally on the PC.

The conventional wisdom is that running a large language model like ChatGPT requires a graphics card with a large amount of video memory, such as the AMD Instinct MI300X GPU mentioned earlier, which is admittedly a long way from ordinary consumers. To let consumer-oriented Core platforms run various large language models smoothly and deliver a fluid user experience, Intel built the BigDL-LLM library. It is designed around low-bit quantization for Intel hardware and supports a variety of low-bit data precisions such as int3, int4, int5, and int8, delivering better performance with lower memory usage.

Through this library, Intel has optimized and supports a variety of large language models, including open-source models that can run locally. The library can even run a large language model with up to 16 billion parameters on an Intel-powered thin-and-light laptop with 16GB of memory. It also supports multiple model families such as LLaMA, Llama 2, ChatGLM, and ChatGLM2.
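To give a sense of how this looks in practice, here is a minimal sketch using BigDL-LLM's Hugging Face-style transformers API with 4-bit quantization; the checkpoint path and prompt are illustrative placeholders, not taken from Intel's demo.

```python
# Minimal sketch: loading a chat model through BigDL-LLM with low-bit (int4) quantization.
# Assumes `pip install bigdl-llm[all]`; the model ID and prompt are placeholders.
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "THUDM/chatglm2-6b"  # any LLM checkpoint supported by BigDL-LLM

# load_in_4bit=True converts the weights to int4 at load time, which is what lets a
# multi-billion-parameter model fit on a 16GB thin-and-light laptop.
model = AutoModelForCausalLM.from_pretrained(
    model_path, load_in_4bit=True, trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "Write a short opening speech for a company annual meeting."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```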

Even before the upcoming Core Ultra series, Intel's client chips, represented by the 12th and 13th Gen Intel Core processors and Intel Arc A-series graphics, already offer strong performance for the high compute demands of generative AI. IT House has run hands-on tests on this.

For the test, I chose a thin-and-light laptop certified for the Intel Evo platform: the ASUS Daybreak Air, powered by a 13th Gen Intel Core i7-1355U processor and 16GB of LPDDR5 memory.

I installed Intel's large language model demo on the ASUS Daybreak Air. The demo integrates three large language models: ChatGLM2, Llama 2, and StarCoder, all of which have been optimized by Intel.

During the test, I first asked the demo, in its story-creation mode, to help me write an opening speech for the host of a company annual meeting. It quickly produced a complete and appropriate opening script, and the latency to the first response was only 12498 ms. Thinking it up and editing it yourself would take far longer, but with the AI model on the PC it is done in minutes.

While the model was writing the copy, I checked how the ASUS Daybreak Air was scheduling its resources: utilization of the 13th Gen Core i7-1355U processor reached 100%, memory usage reached 9.7GB (62%), and Xe graphics usage also reached 39%. Clearly the processing really was being done locally. With Intel's continuous optimization and the compute gains of the 13th Gen Core processors, AIGC really can land on thin-and-light laptops.

I then tested it with a task of extracting the key information from a news article, and it quickly and accurately "summarized" the content. This is very useful for everyday information lookup and report writing, and can greatly improve work efficiency.

Finally, I asked the model to help me write a teaching outline for Zhu Ziqing's essay "Back", and it quickly produced a logical, complete, and detailed outline. For people who need to draft and refine outlines, such as teachers, using AI to assist their work, even without an Internet connection, is very convenient.

Beyond CPUs, Intel also pays close attention to GPU optimization so that GPUs can play a bigger role in on-device AIGC tasks. For the well-known open-source image generation model Stable Diffusion, for example, Intel has enabled OpenVINO acceleration and developed a framework that can be installed with a single command to accelerate PyTorch models. Through the Automatic1111 Stable Diffusion WebUI, Stable Diffusion can run on both Iris integrated graphics and Arc discrete graphics.
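The article does not show the exact setup behind this, but as a rough sketch, running Stable Diffusion through OpenVINO on Intel graphics can look like the following, using the Hugging Face optimum-intel integration as an assumed stand-in for the Automatic1111 WebUI path; the model ID and prompt are placeholders.

```python
# Rough sketch (assumption): Stable Diffusion accelerated by OpenVINO on Intel GPUs
# via optimum-intel. The article's own test used the Automatic1111 WebUI instead;
# the model ID and prompt here are illustrative placeholders.
from optimum.intel import OVStableDiffusionPipeline

# export=True converts the PyTorch weights into OpenVINO IR at load time.
pipe = OVStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", export=True
)

# "GPU" selects the Intel graphics device (Iris Xe integrated or Arc discrete).
pipe.to("GPU")

image = pipe("a man watching TV", num_inference_steps=25).images[0]
image.save("output.png")
```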

In hands-on testing, you can see how Stable Diffusion performs on the integrated graphics of the ASUS Daybreak Air thin-and-light laptop. The 96-EU version of Intel Iris Xe graphics has enough compute to run FP16-precision models in Stable Diffusion and quickly generate high-quality images. Asked to generate "a man watching TV", the ASUS Daybreak Air took only a little over a minute to produce the image.

During generation, IT House also saw in the performance monitor that GPU utilization was at 100% and CPU utilization at about 15%, which shows that the image really was being rendered locally by the GPU.

In the past it was hard to imagine a thin-and-light laptop delivering this kind of performance. But the 13th Gen Core processors' progress in performance and power efficiency, the significant improvement of Iris Xe Graphics (96EU) in FP16 and FP32 floating-point throughput, and the addition of INT8 integer compute have together greatly boosted the GPU's overall AI compute. That is an important reason why thin-and-light laptops like the ASUS Daybreak Air can run Stable Diffusion well locally.

And in the Intel Meteor Lake processors mentioned at the beginning, integrated graphics performance will improve further, with 8 Xe GPU cores, 128 vector engines, and 8 hardware ray-tracing units. It will also bring Arc graphics features such as asynchronous copy and out-of-order sampling, along with optimizations for DirectX 12 Ultimate.

From the perspective of AI changing the world, Intel's push to bring AI broadly into PCs and lead hundreds of millions of PCs into the AI era matters a great deal. For the foreseeable future, at least, the PC remains one of humanity's most important productivity tools. Giving it AI capabilities renews that productivity role, and the change in personal computing will in turn evolve into a transformation of society's productivity as a whole.

All of this testifies to Intel's leadership in the AIGC space. Its continuous innovation gives users a smarter, more efficient computing experience and drives the development and application of artificial intelligence technology. As the technology keeps advancing, we can expect more and stronger AI applications and solutions from Intel, moving us faster into an era of AI-driven productivity.
