Written by: Nine Zero
Edited by: Liu Baohua
Design: Youth
The intelligent driving market in 2023 was lively: a hundred schools of thought contended at the Shanghai Auto Show early in the year, and players of all stripes have been chasing and sniping at one another ever since.
Clearly, intelligent driving has become a must for OEMs seeking to win market share, as well as a hot spot for many leading companies. Traditional OEMs, new car-making forces, traditional Tier 1 suppliers, technology companies, Internet companies, and even consumer electronics companies are all investing heavily in intelligent driving, striving for a place on the fiercely contested automotive intelligence track.
So, from last year to this year, what have been the hot spots in the intelligent driving industry? What is the whole industry promoting and exploring together? What are the strengths and the difficulties? And what is the current state of the market?
With these questions in mind, this article takes stock of several intelligent driving hot spots one by one (urban NOA, high-precision maps, lidar, large models on the car, and cockpit-driving integration) and interprets each from the technical and market perspectives.
Urban NOA, progress falls short of expectations
As intelligent driving has developed, urban NOA, the most capable function in mass-produced intelligent driving, has become a hot spot in the industry and the focus of every player.
Since 2021, under the influence of Tesla, Xpeng and other leading EV makers, OEMs have successively launched high-speed NOA functions for highway and urban expressway scenarios. From 2022, NOA's application scenarios began to extend from highways into urban areas, and thanks to the popularity of BEV+Transformer, 2023 became the year of urban NOA, with many car companies announcing "city opening plans" for their urban NOA.
At present, urban NOA has become a market hot spot and is being installed on new models one after another. According to statistics from the Zosi Automobile Research Institute, from January to September 2023 the penetration rate of high-speed NOA in domestic passenger cars was 67%, a year-on-year increase of 25 percentage points; the penetration rate of urban NOA was 48%, a year-on-year increase of 2 percentage points.
2021-2023 NOA penetration rate of domestic passenger cars (data source: Zosi Automotive Research Institute).
As of January 2024, the main car companies that have mass-produced and equipped urban NOA are Xpeng, Huawei (Jihu/ARCFOX, AVATR, Wenjie), Ideal, Zhiji, etc. In addition, Tesla has pushed urban NOA in North America, while NIO's and WEY's (using Momo Zhixing's solution) urban NOA have been announced and are still in road testing. Judging from the summary in Table 1, there is, on the whole, a gap between the actual rollout of urban NOA and the publicized targets; progress has not met expectations. In particular, some car companies once claimed nationwide availability by the end of 2023, but in fact only a limited number of cities have been covered. Still, they are pushing hard toward full availability of urban NOA and striving for more market share in 2024, which is worth looking forward to.
Xpeng calls its city navigation assistance Urban NGP (Navigation Guided Pilot), available on the P5, G9, G6 and P7i. All four are lidar-equipped models, and from the G9 onward Xpeng's models use a front-facing 8-megapixel binocular camera plus two high-performance NVIDIA Orin-X SoCs delivering 508 TOPS of AI computing power, enough to meet Urban NGP's perception and compute requirements.
In September 2022, Xpeng took the lead in launching Urban NGP in Guangzhou, becoming the first car company in China to put city navigation assisted driving into production cars. As of January 2024, Xpeng's Urban NGP covers 52 Chinese cities, including Guangzhou, Shenzhen, Shanghai, Suzhou, Nanjing, Hangzhou, Ningbo, Beijing, Tianjin, Chengdu, Xi'an, Wuhan and Changsha, making it the most widely available city navigation assisted driving in China.
Xpeng Urban NGP coverage.
Huawei's city navigation assistance function is called NCA (Navigation Cruise Assist) and has been installed on the Jihu (ARCFOX), AVATR and Wenjie series models, in all of which Huawei is deeply involved. The ARCFOX and AVATR models carry three lidars and Huawei's MDC810 computing platform with 400 TOPS of computing power, while the Wenjie series carries one lidar and Huawei's MDC610 platform with 200 TOPS. As of January 2024, Huawei's urban NCA has been opened in six Chinese cities: Shanghai, Shenzhen, Guangzhou, Hangzhou, Chongqing and Beijing.
Coverage of Huawei's urban NCA.
Ideal's urban NOA is available on its L7, L8 and L9 models. All three are equipped with one lidar and two NVIDIA Orin-X chips, with 8-megapixel front-facing binocular and side-view cameras, sufficient to meet the hardware requirements of L2-level intelligent driving. As of January 2024, Ideal's urban NOA has been opened in 10 Chinese cities, including Beijing, Shanghai, Guangzhou, Shenzhen, Hangzhou and Chengdu.
Coverage of Ideal's urban NOA.
Zhiji equips all of its existing models (L7, LS6 and LS7) with urban NOA, implemented with one lidar and one NVIDIA Orin-X chip offering 254 TOPS of computing power. However, Zhiji's urban NOA push came relatively late (January 2024), and so far only one city, Shanghai, has been opened.
Tesla's urban NOA is currently pushed only in North America and has not yet entered China; Tesla has said it will be pushed in China, but that has yet to happen. NIO and Momo Zhixing are still at the road-testing stage and have not begun pushing to users.
Removing high-precision maps, not achievable just yet
At the 2023 Shanghai Auto Show, many domestic OEMs and intelligent driving solution providers shouted the slogan of "removing high-precision maps". Xpeng announced that "urban NGP will no longer need high-precision maps from June 2023"; Ideal said its "AD Max 3.0 system will gradually get rid of high-precision maps"; Zhiji proposed "a data-driven road environment perception model to replace the high-precision map"; Huawei stated bluntly that its "ADS 2.0 system does not need high-precision maps and can be used with or without a map"; even high-precision map providers themselves proposed "light map" intelligent driving solutions. The call for "heavy perception, light map" was everywhere.
High-precision map.
Why have players across the industry voiced "removing high-precision maps"? There are three main reasons.
First, relying on high-precision maps leaves OEMs and intelligent driving developers severely constrained by map vendors, making it difficult to keep maps fresh.
Intelligent driving places high demands on the timeliness of high-precision map data: maps need to be "fresh", ideally updated as often as once a day. Map vendors' update cycles, however, are measured in months or quarters, with an industry average of about three months, so the high-precision maps they provide struggle to meet the needs of intelligent driving development.
Second, high-precision maps are costly and slow to survey and map, which raises the development and operating cost of the entire intelligent driving system.
Because of their high accuracy and rich information, high-precision maps require collecting a large amount of data, making them more expensive and slower to produce than ordinary navigation maps. This high cost and long cycle run directly against the industry's current push for cost reduction and rapid iteration, so removing high-precision maps has gradually become a trend.
Survey results show that surveying decimeter-level high-precision maps costs about 10 yuan per kilometer, with each vehicle collecting roughly 500 kilometers of road data per day; surveying centimeter-level high-precision maps costs about 1,000 yuan per kilometer, with each vehicle collecting only about 100 kilometers per day. In other words, a tenfold increase in accuracy multiplies surveying cost and cycle time several-fold or even by orders of magnitude (a quick estimate follows below).
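To make these figures concrete, here is a minimal back-of-the-envelope sketch in Python using the per-kilometer costs and daily collection rates cited above; the 10,000 km road network and the 10-vehicle fleet are hypothetical assumptions for illustration, not figures from the survey.

```python
# Back-of-the-envelope mapping cost/time estimate using the figures cited above.
# The 10,000 km road network and 10-vehicle fleet are hypothetical assumptions.

def survey_estimate(network_km, cost_per_km, km_per_vehicle_day, fleet_size):
    """Return (total cost in yuan, days needed) to survey a road network once."""
    total_cost = network_km * cost_per_km
    days = network_km / (km_per_vehicle_day * fleet_size)
    return total_cost, days

NETWORK_KM = 10_000   # hypothetical city-level road network
FLEET = 10            # hypothetical number of survey vehicles

dm_cost, dm_days = survey_estimate(NETWORK_KM, 10, 500, FLEET)     # decimeter-level
cm_cost, cm_days = survey_estimate(NETWORK_KM, 1000, 100, FLEET)   # centimeter-level

print(f"decimeter-level: {dm_cost:,.0f} yuan, {dm_days:.0f} fleet-days")
print(f"centimeter-level: {cm_cost:,.0f} yuan, {cm_days:.0f} fleet-days")
```

Under these assumptions the centimeter-level survey costs one hundred times more and takes five times longer than the decimeter-level one, which is exactly why freshness and coverage are so hard to sustain.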
Third, thanks to advances in perception algorithms, the environmental perception results from on-board sensors can increasingly stand in for high-precision maps.
It is precisely the wide adoption of BEV+Transformer that allows environmental data detected by sensors, cameras in particular, to be used to build real-time local maps, gradually replacing high-precision maps built from prior data (a toy sketch follows the figure below).
BEV+Transformer principle.
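As a rough illustration of the idea (a toy sketch under illustrative assumptions, not any vendor's actual pipeline), the PyTorch snippet below lets a grid of learnable BEV queries cross-attend to flattened multi-camera features and decodes a per-cell local semantic map; all module choices, shapes and class names here are assumptions for demonstration.

```python
# Minimal, illustrative BEV + Transformer sketch (PyTorch), not a production pipeline:
# a grid of learnable BEV queries cross-attends to flattened multi-camera features,
# then a small head decodes per-cell map semantics (e.g., lane / drivable area).
import torch
import torch.nn as nn

class ToyBEVFormer(nn.Module):
    def __init__(self, bev_hw=(50, 50), dim=128, classes=3):
        super().__init__()
        self.bev_hw = bev_hw
        self.bev_queries = nn.Parameter(torch.randn(bev_hw[0] * bev_hw[1], dim))
        self.cam_proj = nn.Linear(dim, dim)                      # project camera features
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2)
        self.head = nn.Linear(dim, classes)                      # per-BEV-cell semantics

    def forward(self, cam_feats):
        # cam_feats: (B, num_cams, H, W, dim) features from a camera backbone
        b = cam_feats.shape[0]
        kv = self.cam_proj(cam_feats.flatten(1, 3))              # (B, num_cams*H*W, dim)
        q = self.bev_queries.unsqueeze(0).expand(b, -1, -1)      # (B, HW_bev, dim)
        bev, _ = self.cross_attn(q, kv, kv)                      # lift image feats to BEV
        bev = self.encoder(bev)                                  # refine in BEV space
        logits = self.head(bev)                                  # (B, HW_bev, classes)
        return logits.reshape(b, *self.bev_hw, -1)               # real-time local map

# Usage with random tensors standing in for backbone features:
model = ToyBEVFormer()
fake_feats = torch.randn(2, 6, 16, 32, 128)
local_map = model(fake_feats)   # (2, 50, 50, 3) logits of a local semantic map
```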
Now that we have entered 2024, how is the removal of high-precision maps progressing? Has the industry fully "removed the map"? The answer seems to be no. Based on research into the above companies' mapless solutions, as well as those of other companies in the industry, we find three main approaches currently used to replace the traditional high-precision map:
The first is real-time local mapping: building a real-time local map from sensor information with BEV+Transformer. In theory this can remove the high-precision map entirely, but in the view of most perception-algorithm experts, current environmental perception has not yet reached the level achieved by fusing sensors with high-precision maps; it is improving steadily and approaching, but has not reached, fully map-free driving.
The second is crowdsourced mapping: collecting road information through vehicles already sold, aggregating data from ordinary users, and drawing the map from it (a toy aggregation sketch follows this list). This approach has been proposed and adopted over the past few years; it is still map building in essence, only no longer relying on map vendors, with users instead helping OEMs collect map data.
The third is the lightweight high-precision map, a simplified version whose accuracy and information content sit between a navigation map and a full high-precision map. This can be understood as a transitional state, the result of balancing overall perception and positioning performance against map cost; a map is still required.
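The following toy Python sketch illustrates the crowdsourcing idea in its simplest form: many vehicles report noisy lane-boundary points, and the map builder averages the reports per grid cell, keeping only cells confirmed by enough vehicles. The grid size, noise level and straight-lane geometry are invented for illustration and are not tied to any specific OEM's pipeline.

```python
# Toy crowdsourced-mapping sketch: many vehicles report noisy lane-boundary points,
# and the map builder averages reports per grid cell to estimate the true geometry.
from collections import defaultdict
import random

CELL = 0.5  # meters; resolution of the aggregated map layer (illustrative)

def cell_of(x, y):
    return (round(x / CELL), round(y / CELL))

def aggregate(reports):
    """reports: iterable of (x, y) lane-boundary observations from sold vehicles."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for x, y in reports:
        s = sums[cell_of(x, y)]
        s[0] += x; s[1] += y; s[2] += 1
    # Keep only cells seen by enough vehicles to filter out one-off noise.
    return {c: (sx / n, sy / n) for c, (sx, sy, n) in sums.items() if n >= 5}

# Simulate 200 vehicle passes over a straight lane boundary at y = 3.5 m with noise.
random.seed(0)
reports = [(x / 10, 3.5 + random.gauss(0, 0.15))
           for _ in range(200) for x in range(0, 1000, 10)]
lane_map = aggregate(reports)
print(len(lane_map), "map cells estimated from crowdsourced reports")
```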
From the above analysis it is not hard to see that, although removing high-precision maps is the trend advocated by OEMs and intelligent driving developers, it has not yet been fully realized. "Removing the map" is more about escaping dependence on map vendors; current environmental perception cannot yet support completely map-free driving. At this stage, high-level intelligent driving still cannot do without high-precision maps, but the way maps are built is changing and the accuracy requirements are falling.
Removing lidar, a battle of technical routes
Lidar was once hailed as "the eyes of autonomous driving", but its high cost has made most car companies hesitate. In recent years the call to remove lidar has never gone away, and the battle between the pure-vision perception route and the vision-plus-lidar fusion route has never stopped.
Point cloud effect of lidar.
Lidar was once a must-have for high-end intelligent driving in China, especially for urban NOA, and the country's leading intelligent driving players generally adopted camera-lidar data fusion to achieve the accurate perception that high-end intelligent driving requires. Xpeng, NIO, Ideal, AVATR, Wenjie and others are all equipped with lidar.
At the same time, another voice has emerged, represented by Tesla: a pure-vision perception solution that uses only cameras.
Musk has said more than once: "Humans drive using only their eyes to observe the environment, so by first principles a pure-vision solution is the right route." Tesla, as a pioneer of intelligent driving, has also demonstrated the feasibility of pure vision.
Judging from current installations of high-end intelligent driving, most models still carry varying numbers of lidars, and only Tesla and Jiyue have chosen pure-vision solutions.
As is well known, Tesla's visual perception algorithms have long led the industry, and Jiyue's confidence in pure vision naturally comes from years of accumulation in the intelligent driving field. For the other OEMs and solution providers, we venture a bold guess: perhaps the fusion route is chosen not only because its perception performance is better, but also because they lack confidence in their pure-vision perception algorithms, and because lidar helps amplify the marketing story around intelligence, so the costly lidar gets fitted anyway.
Although most high-end intelligent driving models in China carry lidar, automotive lidar remains a field with high technical barriers and uncertain demand. Even though prices have fallen below 10,000 yuan in recent years, the number of qualified vendors is still very limited and product models are few, quite unlike cameras, millimeter-wave radars and other sensors.
At present, the main automotive lidar manufacturers are Suteng Juchuang (RoboSense), Tudatong (Seyond, formerly Innovusion), Hesai Technology, Luminar, Livox, Velodyne, Ouster, Liangdao Intelligence, Tanwei Technology, Beixing, etc., plus a few solution providers that claim self-developed lidar, such as Huawei. The lidars in mass production on cars today are semi-solid-state, and judging from domestic shipments, lidars from RoboSense, Seyond and Hesai are the most widely used.
RoboSense's automotive lidar products are mainly the RS-LiDAR M series, including the M1, M1 Plus and M3; the M1 is the lidar fitted to Xpeng's models, and the M3 is an ultra-long-range lidar that can detect targets up to 300 m away. In addition, RoboSense offers the E1 blind-spot lidar and mechanical rotating lidars such as the Ruby Plus, Helios and Bpearl.
Seyond (formerly Innovusion) divides its lidar products into the Falcon series and the Lark series. The Falcon series, including the two ultra-long-range lidars Falcon K and Falcon Q, has been installed on NIO's models; the Lark series includes the long-range Lark E and the blind-spot Lark W.
Hesai's lidar products include the AT128, already installed on Ideal models; the AT512 ultra-long-range lidar with a maximum detection range of 400 m; the ultra-thin ET25 with a body height of only 25 mm; the FT120 blind-spot lidar; and several mechanical rotating lidar products such as the Pandar, QT and XT series.
Large models on the car, an industry consensus
Since the advent of ChatGPT, large AI models have become an irreversible trend, finding use across industries, intelligent driving included. Large models make end-to-end intelligent driving possible, attracting players from many fields across the industry, and have already produced some results.
Large AI models have become a consensus in the intelligent driving industry, and many players have joined in. OEMs, Tier 1s, technology companies and chip companies are each pushing in different directions to build their own competitive advantages in the large-model era, and have formed a cooperative relationship with a fairly clear division of labor.
OEMs face the end customer directly and can leverage their large fleets of production vehicles to collect huge amounts of real-road data and continuously optimize their large AI models. At the same time, OEMs can use their advantages and dominant position in the industrial chain to integrate upstream and downstream resources. However, developing large models for intelligent driving is costly and slow, placing high demands on OEMs' resource investment and technical strength. NIO, Xpeng, Ideal, AVATR, Geely, BYD, Great Wall, GAC and others currently have relevant large-model layouts and applications.
As providers of intelligent driving solutions, Tier 1s approach large models mainly at the software and algorithm level. They develop vertical large models for intelligent driving, provide large-model services to OEMs, and build their own closed-loop data systems to form a complete large-model ecosystem; BEV+Transformer, for example, is a large-model solution commonly developed and promoted by Tier 1s. At this stage, Momo Zhixing and SenseTime are typical representatives of intelligent driving large-model providers.
In April 2023, Momo Zhixing released DriveGPT "Snow Lake Hairuo", billed as the industry's first generative large model for intelligent driving.
Snow Lake Hairuo uses RLHF (Reinforcement Learning from Human Feedback) and introduces real user data to continuously optimize its cognitive decision-making model. It can generate multiple scene sequences according to probability, quantify the self-driving behavior trajectories that users care most about, output a clear decision-making logic chain, and express scenes as tokens through its Drive Language (a toy illustration follows the figure below). Snow Lake Hairuo will be installed on the new WEY Mocha DHT-PHEV and, via the HPilot 3.0 system, will realize urban NOH (Navigation on HPilot, navigation assisted driving).
Momo Zhixing released the large model of Snow Lake Hairuo.
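As a purely illustrative toy (not DriveGPT's actual architecture or data), the snippet below mimics the idea of sampling several candidate "scene token" sequences from a probability model and ranking them by likelihood; the vocabulary and transition probabilities are entirely made up.

```python
# Toy illustration of probabilistic scene-sequence generation over a small
# "drive language" vocabulary. Tokens and probabilities are invented.
import math
import random

VOCAB = ["keep_lane", "slow_down", "lane_change_left", "lane_change_right", "stop"]

# Hypothetical next-token probabilities given the previous token.
TRANSITIONS = {
    None:                [0.55, 0.20, 0.10, 0.10, 0.05],
    "keep_lane":         [0.60, 0.20, 0.08, 0.08, 0.04],
    "slow_down":         [0.30, 0.30, 0.05, 0.05, 0.30],
    "lane_change_left":  [0.70, 0.15, 0.05, 0.05, 0.05],
    "lane_change_right": [0.70, 0.15, 0.05, 0.05, 0.05],
    "stop":              [0.20, 0.10, 0.00, 0.00, 0.70],
}

def sample_sequence(length=5):
    """Autoregressively sample one candidate scene sequence and its log-probability."""
    seq, logp, prev = [], 0.0, None
    for _ in range(length):
        probs = TRANSITIONS[prev]
        token = random.choices(VOCAB, weights=probs)[0]
        logp += math.log(probs[VOCAB.index(token)])
        seq.append(token)
        prev = token
    return seq, logp

random.seed(42)
candidates = sorted((sample_sequence() for _ in range(8)), key=lambda c: -c[1])
for seq, logp in candidates[:3]:   # the most probable candidate futures
    print(f"{logp:6.2f}  " + " -> ".join(seq))
```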
SenseTime's large model, UniAD, integrates detection, tracking, mapping, planning and other algorithm modules into a single Transformer-based end-to-end framework, unifying different computing tasks and enabling end-to-end processing. SenseTime has also built a closed-loop data system for decision and planning algorithms and established a data-driven decision-planning algorithm library, combining big data with decision and planning algorithms and making intelligent driving more human-like through large-model processing.
Advantages of SenseTime UniAD.
Technology companies' strength lies in their deep AI technology reserves and cloud resources. Major domestic technology companies accumulated considerable general-AI technology early on, which can be transferred directly to intelligent driving, moving from general large models into this vertical domain. They also use their own cloud resources to build cloud services that customers can call on to develop and deploy their own large models. Huawei, Tencent and Apollo are the leaders here.
In the large-model era, chip companies still focus on chips. By providing hardware suited to deploying large AI models, they continue to occupy an important position in the large-model competition; NVIDIA, Qualcomm and Horizon are examples. At the same time, chip companies usually provide a complete development toolchain matched to the chip for developers to use.
Cockpit-driving integration, slow to land at this stage
From driving-parking integration to cockpit-driving integration, integration has become an inevitable trend for intelligent vehicles, and therefore for intelligent driving as well. Cockpit-driving integration has been a hot topic and a direction of technical development over the past two years, but its actual landing appears far slower than that of driving-parking integration.
Cockpit-driving integration covers both software and hardware. Integration at the software and functional level means fusing the software and data of intelligent driving and the cockpit: upgrading the software architecture, developing a service-oriented architecture (SOA), opening up cross-domain information and data exchange, and linking intelligent driving and cockpit functions. Integration at the hardware level means merging the hardware itself, an intuitive and visible consolidation that fundamentally changes the underlying software and communication model and brings obvious BOM cost advantages.
Hardware-level cockpit-driving integration solutions at different degrees of integration.
It is clear that software-level cockpit-driving integration is more about functions and applications and is relatively easy to achieve, whereas hardware-level integration produces entirely new, consolidated hardware and involves far more development work and difficulty. At the hardware level there are currently solutions at different degrees of integration: One-Box, One-Board and One-Chip.
The One-Box solution puts the intelligent driving core board and the cockpit core board inside the same domain controller, but the original software and hardware architecture of each board is unchanged, as is the board-to-board communication scheme. One-Box is somewhat "integration for integration's sake", with the lowest degree of integration and the lowest difficulty.
The One-Board solution places the intelligent driving SoC and the cockpit SoC on the same core board. The two SoCs still process intelligent driving and cockpit data separately, but because they sit on one board they share peripheral hardware such as the MCU, storage and interfaces. One-Board raises the level of integration, reduces communication latency and improves overall cockpit-driving performance; it also cuts down the hardware beyond the SoCs, reducing the system's BOM cost.
The One-Chip solution processes both intelligent driving and cockpit data on a single SoC, running virtual machines on the SoC to host the different functional modules of intelligent driving and the cockpit. One-Chip is true cockpit-driving integration: the highest degree of integration, the greatest consolidation of hardware across the two domains, the best performance and the greatest cost reduction. It is also the most difficult to implement, and there are few application cases so far.
Although cockpit-driving integration has become an industry consensus and an inevitable trend for intelligent vehicles, it is difficult to achieve at this stage, and a highly integrated cockpit-driving solution will be hard to land in the short term.
Technically, for the One-Chip solution, existing SoCs cannot meet the requirements of cockpit-driving integration, in particular the combined computing power (NPU + GPU) it demands. The operating system arrangement is also difficult: intelligent driving software is usually based on Linux and C++, while intelligent cockpit software is based on QNX or Android, and the two are hard to reconcile well on one chip.
For the One-Box or One-Board solutions, the overall size of the controller imposes new requirements on its installation and layout. More importantly, the number of computing tasks inside one controller rises sharply, significantly increasing power consumption and heat generation, which challenges the controller's thermal design.
From a market perspective, neither the C-end nor the B-end currently has a strong desire for cockpit-driving integration; it is nice to have, not must have. C-end users can hardly perceive the performance and experience gains it brings, since the difference is small; B-end OEMs have no urgent requirement for it either, and integration also means merging the original intelligent driving and cockpit development departments, which creates its own resistance.
On cost, although cockpit-driving integration can effectively reduce system cost in the long run, especially hardware BOM cost, it incurs heavy development expense in the short term, and it is an open question when the cost savings will materialize. In addition, the SoC in a One-Chip solution must be a high-performance chip, which will not be cheap, further increasing developers' investment.
Because of these difficulties, cockpit-driving integration is still progressing slowly. The One-Box solution, being the least integrated and least difficult, was implemented first by a few players such as Tesla and Xpeng, but has not become widespread. The One-Board solution demands strong core-board design capability, and very few cases have landed so far. The One-Chip solution depends on high-performance SoCs that meet the requirements and still has to wait on chip makers' progress.
At this stage, the cockpit-driving integration cases that have appeared on the market are mainly solutions from players such as Tesla, Xpeng, ZeroBeam, Jiyue, Desay SV and Thundersoft.
Tesla planned its One-Box cockpit-driving integration at the level of the vehicle's electronic and electrical architecture, combining the three core boards responsible for intelligent driving, the intelligent cockpit, and internal and external communication into one controller to form a single central computing unit.
Xpeng proposed a One-Box solution similar to Tesla's, integrating the XPU of the intelligent driving system, the DCU of the central control system and the ICM of the instrument system into the same domain controller to form a cockpit-driving integrated domain controller, branding it as its "three-in-one cockpit" concept and applying it in the Xpeng G9.
ZeroBeam is one of the earliest solution providers in China to propose cockpit-driving integration, built on the full stack of its Galaxy 3.0 architecture. At the hardware level this consists of two high-performance computing units (HPC) and four zonal controllers, with one HPC integrating the intelligent driving and cockpit functions, which still makes it a One-Box solution. At the software level, ZeroBeam integrates the middleware and SOA atomic-service layers and provides unified, standardized APIs (Application Programming Interfaces), making it easy for different intelligent driving and cockpit modules to schedule and reuse algorithms.
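As a loose illustration of what a unified, standardized API between cockpit and driving modules can look like (hypothetical names and interfaces, not ZeroBeam's actual Galaxy 3.0 API), consider the minimal Python sketch below, in which the driving domain publishes an atomic perception service that the cockpit domain reuses for HMI rendering.

```python
# Loose illustration of an SOA-style "atomic service" layer shared by cockpit and
# driving modules (hypothetical interface, not any vendor's real architecture).
from typing import Any, Callable, Dict

class ServiceBus:
    """Minimal registry: any domain can publish or call atomic services by name."""
    def __init__(self) -> None:
        self._services: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str, handler: Callable[..., Any]) -> None:
        self._services[name] = handler

    def call(self, name: str, **kwargs) -> Any:
        return self._services[name](**kwargs)

bus = ServiceBus()

# The driving domain exposes its perception result as an atomic service ...
bus.register("perception.front_objects",
             lambda: [{"type": "car", "distance_m": 42.0}])

# ... and the cockpit domain reuses it, e.g. to draw targets on the HMI map.
def render_hmi_overlay() -> None:
    for obj in bus.call("perception.front_objects"):
        print(f"HMI: drawing {obj['type']} at {obj['distance_m']} m")

render_hmi_overlay()
```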
Jiyue took the lead in proposing "true redundancy" in intelligent driving: when the intelligent driving domain controller fails, the cockpit domain controller can take over and provide simple functions such as ACC, a cross-domain redundancy strategy. In a sense, Jiyue has realized a One-Chip solution on the cockpit SoC (Qualcomm SA8295), though the intelligent driving functions involved are very basic. Jiyue has also achieved cockpit-driving integration at the application (software) level with its 3D human-machine co-driving map: through cross-domain resource scheduling between the intelligent driving domain and the cockpit domain, perception results are visualized directly, providing a faithfully rendered, virtualized driving view.
In April 2022, Desay SV released Aurora, an in-vehicle intelligent computing platform billed as the industry's first mass-producible one. At the hardware level, Aurora integrates NVIDIA Orin, Qualcomm SA8295 and Black Sesame's Huashan A1000 chips, making it a One-Box solution; at the software level, Aurora can provide intelligent driving, intelligent cockpit and intelligent connectivity services simultaneously, achieving a degree of integrated computing. In addition, Aurora uses a plug-in structure that can scale and trim computing power on demand to meet diverse computing needs.
Thundersoft has proposed cockpit-driving integration solutions based on Qualcomm chips, of the One-Chip type. It has two: one based on the Qualcomm SA8295 realizes cockpit-parking integration, i.e. combining intelligent cockpit and intelligent parking functions, but still needs another SoC for the intelligent driving functions, so it is not full cockpit-driving integration; the other, based on the Qualcomm SA8795, targets complete cockpit-driving integration but has not yet been mass-produced and is still in development.
Judging from the above, in 2023 the cutting-edge hot topics in intelligent driving stayed hot, but actual progress fell short of the publicity: the number of cities opened for urban NOA is limited, high-precision maps have not been fully removed, lidar remains contested, large AI models still need continued deepening, and cockpit-driving integration is difficult and slow.
We can also see that, with competition on the intelligent driving track growing ever fiercer, industry players in 2024 will inevitably keep pushing in these hot directions, striving to win more market share by breaking through technical difficulties and cutting costs, and finally turning these cutting-edge hot spots into everyday, mass-market applications.