Straight to MWC | A Conversation with Intel: How Telecom Networks Are Making AI Ubiquitous

Mondo Technology Updated on 2024-02-29

Our correspondent Tan Lun reports from Barcelona.

Looking back, there has probably never been a global technology event themed on mobile communications with such a strong focus on AI. Driven by the wave of revolutionary AI innovation since 2023, every branch of the tech industry is eager to join the AI agenda. All of a sudden, how to make AI ubiquitous has become the most ambitious and cutting-edge question of the times, and telecom networks, as one of the world's most critical pieces of information infrastructure, are no exception.

At MWC 2024 in Barcelona, the veteran chip giant Intel was the first to respond to this question. Unlike chip vendors that focus only on rapidly scaling raw computing power, Intel has set its sights on every link in the network, including the core network, the access network, and the network edge, in order to build a complete set of AI solutions that make AI ubiquitous across the entire network.

"Intel's mission is to help solve workload challenges, modernize and monetize infrastructure, and make AI ubiquitous through best-in-class engineering platforms, security solutions, and support for open ecosystems," Sachin Katti, senior vice president and general manager of Intel's Network and Edge Group, told China Business News during the conference.

According to Sachin Katti, with the launch of new products such as the 5th Gen Intel Xeon processors and its edge platform, Intel is one step closer to realizing "AI everywhere" in the network.

Energy consumption is the biggest challenge for the 5G core

In a complete mobile network, the operator-run core network is regarded as the "brain" of the network: it executes and controls signaling, and it is densely packed with large switches and servers, which makes it one of the most energy-hungry parts of the network architecture.

According to public data, core network energy consumption accounts for roughly 20%-40% of operators' network operating expenses. Taking China's three major operators as an example, in 2020 China Mobile's energy expenditure was 37.66 billion yuan, China Telecom's was 14.64 billion yuan, and China Unicom's was 12.9 billion yuan, a total of 65.2 billion yuan, while the three operators' combined profit that year was 141.48 billion yuan.

In this context, saving energy in the core network has become one of the top challenges operators face. "For the core network, power consumption, efficiency, sustainability and energy savings are very important," Alex Quach, vice president and general manager of Intel's Wireline and Core Network Division, told reporters during the conference, adding that this has prompted Intel to reduce core network energy consumption through AI innovation on the chip side.

To that end, Intel announced at this conference that it will launch the next-generation Intel Xeon platform, codenamed Sierra Forest, later in 2024. According to Intel, the platform can deliver a 2.7x improvement in per-rack performance compared with existing 5G core infrastructure.

During the conference, the reporter saw at the Intel booth that global operators and equipment makers, including British Telecom, Dell Technologies, Ericsson, HPE, KDDI, Lenovo and SK Telecom, have shown strong interest in this platform.

In addition, to further reduce energy consumption and improve energy efficiency, Intel announced during the conference that the power manager software it previously launched for the 5G core network has been adopted by leading global equipment makers, including Nokia, Samsung and NEC, and is planned for commercial deployment this year. The reporter learned at the exhibition that, based on measured data, the software can cut the CPU power consumption of core network equipment by 40%.
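The article does not describe how such power manager software achieves these savings. Purely as a rough sketch of the general idea of runtime CPU power management on a packet-processing host, the hypothetical Python snippet below polls overall CPU load on a Linux machine and switches the cpufreq governor between performance and power-saving modes during quiet periods; the thresholds, paths and control logic are illustrative assumptions, not Intel's implementation.

```python
# Hypothetical sketch: scale CPU frequency policy with processing load.
# Illustrates the general idea of reclaiming unused core-network CPU headroom
# for power savings; NOT the power manager software described in the article.
# Requires Linux and root privileges to write the cpufreq governor.
import glob
import time

BUSY_THRESHOLD = 0.60   # assumed load above which full performance is kept
IDLE_THRESHOLD = 0.30   # assumed load below which cores may be slowed down

def read_cpu_busy_fraction(interval=1.0):
    """Estimate overall CPU busy fraction from /proc/stat over `interval` seconds."""
    def snapshot():
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + fields[4]          # idle + iowait ticks
        return sum(fields), idle
    total1, idle1 = snapshot()
    time.sleep(interval)
    total2, idle2 = snapshot()
    dt, di = total2 - total1, idle2 - idle1
    return 1.0 - (di / dt) if dt else 0.0

def set_governor(governor):
    """Write the chosen cpufreq governor to every CPU that exposes one."""
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
        with open(path, "w") as f:
            f.write(governor)

def control_loop():
    while True:
        busy = read_cpu_busy_fraction()
        if busy > BUSY_THRESHOLD:
            set_governor("performance")   # keep packet-processing latency low
        elif busy < IDLE_THRESHOLD:
            set_governor("powersave")     # reclaim power during quiet periods
        # between the two thresholds, leave the current policy unchanged

if __name__ == "__main__":
    control_loop()
```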

Virtualized radio access networks (vRAN) are becoming more widespread

While the core network platform has entered the AI era, the transformation of the network itself has not stopped. Network virtualization and software-defined networking have been advancing for more than a decade, bringing global recognition to vRAN (virtualized radio access network), which the telecom industry regards as one of the wireless access approaches that can make 5G ubiquitous.

Dan Rodriguez, vice president and general manager of Intel's Network and Edge Solutions Group, told reporters that Intel is studying how to integrate AI capabilities into the core network and the RAN to improve energy efficiency, server utilization and spectrum efficiency, and to relieve RAN congestion.

To this end, Intel has continued to push vRAN forward.

The reporter noted that at last year's MWC, Intel released the 4th Gen Intel Xeon processor with integrated Intel vRAN Boost, and at this year's MWC it introduced the next-generation Intel Xeon platform codenamed Granite Rapids-D. The processor will take advantage of the optimized Intel AVX instruction set, along with other architectural enhancements and features, to deliver significant improvements in vRAN performance, and it will again integrate Intel vRAN Boost.

Currently, the chip is undergoing sample testing. Samsung has already conducted its first call demonstration at its R&D lab in Suwon, South Korea, and Ericsson has demonstrated its first call verification at its California lab.

At the same time, Intel will make the Intel vRAN AI Development Kit available to partners in advance, so that operators and developers can build, train, optimize and deploy AI models for vRAN use cases on general-purpose servers.
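The article does not describe the development kit's interfaces. Purely as a hypothetical illustration of the kind of vRAN use case such AI models target, the sketch below trains a small regression model on synthetic hourly cell-load data to predict load in the next hour, the sort of prediction that could feed power or resource decisions; the data, features and choice of scikit-learn are assumptions, not part of the kit.

```python
# Hypothetical vRAN-style AI use case: predict a cell's load in the next hour
# from its recent history. Synthetic data and model choice are assumptions;
# this does not use the Intel vRAN AI Development Kit.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic per-cell load trace (fraction of resources in use) with a daily pattern.
hours = np.arange(24 * 14)                     # two weeks of hourly samples
load = 0.5 + 0.3 * np.sin(2 * np.pi * hours / 24) + 0.05 * rng.standard_normal(hours.size)
load = np.clip(load, 0.0, 1.0)

# Features: the previous 6 hours of load; target: the next hour's load.
WINDOW = 6
X = np.array([load[i:i + WINDOW] for i in range(len(load) - WINDOW)])
y = load[WINDOW:]

split = int(0.8 * len(X))
model = Ridge(alpha=1.0).fit(X[:split], y[:split])

pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"mean absolute error on held-out hours: {mae:.3f}")
```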

Intel said it is currently working with AT&T, Deutsche Telekom, SK Telecom and Vodafone to fully demonstrate the benefits that AI can bring to RAN.

The network edge has entered the AI era

While the core network platform has entered the AI era, the large-scale deployment of 5G around the world over the past five years has also driven rapid growth in the network edge market. According to Gartner, by 2025 more than 50% of enterprise-managed data will be created and processed outside the data center or cloud. IDC estimates that the global edge computing market will grow from US$4.4 billion to US$36.9 billion, a compound annual growth rate of 50.2%.

This also makes AI an essential requirement for a large number of network edge devices. "Historically, AI has been concentrated in the data center, and enterprises are now pursuing more opportunities through AI-based automation, which is accelerating this trend. As massive amounts of data explode at the edge, the data generated by mobile phones, PCs or retail stores is driving ever higher levels of intelligence," Sachin Katti said.

However, achieving AI at the network edge is not easy. According to Pallavi Mahajan, vice president and general manager of software engineering in Intel's Network and Edge Group, the cloud and the edge are very different, especially when it comes to AI.

"Edge AI is more about inference. Unlike in cloud computing, you don't need to set up separate clusters to run AI workloads; instead, you can run AI inference on your existing compute alongside your existing software workloads. The edge is also complex and highly diverse, from heterogeneous hardware, software and operating systems to constraints on power and space," Pallavi Mahajan told reporters.

Pallavi Mahajan further noted that while there are custom solutions on the market today that address the complexity of the edge, they are often built on closed systems and dedicated hardware, which makes them difficult to scale and maintain, complex to integrate, and expensive in total cost of ownership. As a result, integrating legacy systems and adding new use cases is costly and time-consuming, and more than half of all AI projects fail before they reach production.

To that end, Intel used this MWC to launch a new commercial edge-native software platform, an upgraded version of the previously announced Project Strata, which enables enterprises to develop, deploy, run and manage edge applications at scale on standard hardware. The platform is reported to have built-in integration with OpenVINO AI inference, enabling real-time inference optimization and dynamic workload scheduling within the application deployment infrastructure software.
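To give a concrete sense of what running AI inference on existing general-purpose compute with OpenVINO looks like, here is a minimal generic sketch; the model file and input are placeholders, and this is ordinary OpenVINO usage rather than code from Intel's edge platform itself.

```python
# Minimal OpenVINO inference sketch on a general-purpose CPU.
# The model path and input are placeholders; this is a generic example,
# not code from Intel's edge platform.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")            # assumed: an OpenVINO IR model file
compiled = core.compile_model(model, "CPU")     # run on the host CPU, no separate cluster

# Build a dummy input matching the model's first input shape.
input_port = compiled.input(0)
dims = tuple(int(d) for d in input_port.shape)
dummy = np.random.rand(*dims).astype(np.float32)

result = compiled([dummy])[compiled.output(0)]  # synchronous inference call
print("output shape:", result.shape)
```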

At the conference, the reporter learned that companies including Amazon Web Services (AWS), Lenovo, L&T Technology Services, SAP, Red Hat, Vericast, Verizon Business and Wipro will support Intel's edge platform.

(Editor: Zhang Jingchao; Proofreader: Zhang Guogang)
