Source: Zhitong Finance App
Following rumors a few days ago that Nvidia (NVDA.US) will enter the custom chip business, the artificial intelligence boom has pushed the company to a new peak.
According to reports, Nvidia's share price reached an all-time high of $734.96 and the company's market capitalization hit $1.82 trillion, surpassing retail giant Amazon's $1.81 trillion. That makes Nvidia the fourth-largest company by market capitalization in the United States, just one step behind Alphabet's $1.87 trillion.
*Note that the last time Nvidia surpassed Amazon in market capitalization was in 2002, when both companies had a market capitalization of less than $6 billion.
The market's bet on strong AI demand has made Nvidia the best performer among the so-called "Magnificent Seven," with the stock soaring 223% over the last 12 months and rising 46% so far this year.
But for this chip giant, perhaps the strongest in history, the journey of making history seems far from over.
Artificial intelligence has transformed the data center and NVIDIA.
NVIDIA was founded to revolutionize 3D computer graphics in gaming and multimedia. The company had early success with a variety of chips, then made a major leap in 1999 with the introduction of the world's first graphics processing unit (GPU), the NVIDIA GeForce 256.
That milestone has culminated in the latest GeForce RTX 40 Series, which delivers photorealistic graphics with Deep Learning Super Sampling (DLSS), a notable NVIDIA innovation that uses artificial intelligence (AI) to generate additional frames and enhance image quality in game scenes.
Through fiscal year 2022 (ended January 30, 2022), gaming was NVIDIA's biggest revenue driver. The division generated $12.5 billion in sales for the year, 46% of the company's total revenue. But then everything changed:
Data centers used to be where companies stored valuable information, but they have since evolved into centralized hubs for computing operations (better known as cloud computing). Today, data centers run powerful NVIDIA-designed chips to handle AI workloads.
This shift began in 2016, when NVIDIA delivered its first AI supercomputer to OpenAI, where it was used to develop the early generative AI models that eventually led to the famous ChatGPT chatbot.
NVIDIA's flagship H100 data center GPUs now sell for up to $40,000 each. Cloud operators such as Microsoft and Amazon have ordered hundreds of thousands of these chips to give their customers the computing power they need to develop AI.
This led to a 279% year-over-year surge in NVIDIA's data center revenue in the third quarter of fiscal 2024 (ended October 29, 2023). The data center business now accounts for 80% of NVIDIA's total revenue, leaving gaming far behind.
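As a sanity check on those figures, the growth and revenue-share arithmetic is straightforward. The dollar amounts below are purely illustrative placeholders, not figures from NVIDIA's filings; only the 279% and 80% rates come from the reporting above.

```python
def yoy_growth_pct(current: float, prior: float) -> float:
    """Year-over-year growth expressed as a percentage."""
    return (current / prior - 1) * 100

def revenue_share_pct(segment: float, total: float) -> float:
    """A segment's share of total revenue as a percentage."""
    return segment / total * 100

# Illustrative placeholder figures (billions of dollars), not from filings:
prior_dc = 4.0                      # hypothetical year-ago data center revenue
current_dc = prior_dc * (1 + 2.79)  # a 279% YoY increase, as reported
total = current_dc / 0.80           # if data center is 80% of total revenue

print(round(yoy_growth_pct(current_dc, prior_dc)))  # 279
print(round(revenue_share_pct(current_dc, total)))  # 80
```

Whatever the absolute dollar figures, the two reported rates pin down the ratios: current revenue is 3.79x the year-ago quarter, and total revenue is 1.25x the data center segment.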
NVIDIA is now a $1.8 trillion behemoth, with $1 trillion of that value created in the last 12 months. For now, it seems the stock may still move higher.
Every country needs AI, and data centers will continue to soar.
NVIDIA founder and CEO Jensen Huang told attendees at the World Governments Summit in Dubai on Monday that every country needs its own intelligence production capacity.
Speaking during a fireside chat with the UAE's Minister of Artificial Intelligence, H.E. Al Olama, Huang described sovereign AI (emphasizing a country's ownership of its data and the intelligence it generates) as a huge opportunity for world leaders.
"It records your culture, your society's wisdom, your common sense, your history; you own your own data," Huang told Al Olama during their conversation, the highlight of an event attended by more than 4,000 delegates from 150 countries.
Huang urged leaders not to be "fooled" by AI. AI's unprecedented ability to take instructions from ordinary people makes it critical for countries to embrace AI and infuse it with local languages and expertise.
Huang even pushed back against the advice made over the years by many visionaries who urged young people to study computer science in order to compete in the information age.
"In fact, it's almost the exact opposite," Huang said. "Our job is to create computing technology that doesn't require anyone to program, and the programming language is human: now everyone in the world is a programmer. That's the miracle."
At the same summit, Huang also said we are at the beginning of a new era: the installed base of data centers around the world is worth about a trillion dollars, and over the next four to five years it will grow to two trillion dollars' worth of data centers powering software around the world. All of it will be accelerated, and this accelerated computing architecture is a great fit for the next generation of software, generative AI.
For NVIDIA as a commercial company, Huang said that "general-purpose computing" is not the best path to the fast, efficient, and cost-effective AI we want; in his telling, it is modern accelerated computing that has driven AI's growth and its entry into the market. The only way for the industry to transition to a "next-generation" state, he said, is to upgrade to accelerated computing, which requires huge economic resources and efficient hardware.
Enter the custom chip design business, winner takes all.
In NVIDIA's original business plan, the company hoped a unified GPU would win over all customers. But it is now watching customers defect to chips of their own. As mentioned at the beginning of this article, rumors say Nvidia is entering the custom chip business, hoping to further strengthen its leading position in the AI market by customizing chips for its customers.
NVIDIA has reportedly set up a group to create new business models that help customers build their own solutions using NVIDIA IP and even chiplets. With this move, Nvidia has begun to build an AI licensing giant.
Readers familiar with the silicon industry will know that many companies designing their own chips, whether to reduce costs or to tailor solutions to their computing needs, already work with firms such as Broadcom and Marvell on back-end physical design, SerDes blocks, or IP (such as Marvell's high-performance Arm CPU cores). EDA vendors such as Cadence and Synopsys have done a great job of providing IP blocks that SoC designers can drop into their chips, saving money and speeding time to market. None of this is new: SiMa.ai, for example, uses image processors from Synopsys in its edge AI chips.
Tenstorrent, the startup led by Jim Keller, saw the opportunity and transformed itself from a potential NVIDIA competitor into a Toronto- and Austin-based IP and design shop that provides chiplets and intellectual property to companies like Kia and LG.
And in the field of AI, we are seeing a new trend: designers of TVs, cars, or networking devices want custom solutions to reduce costs or to differentiate with AI, but they don't have the need or the expertise to build an entire chip.
As for big customers such as Google, Amazon AWS, Meta (which is expected to use its own chips later this year), and Microsoft Azure, they already have their own custom chips for in-house AI as well as NVIDIA GPUs for cloud customers. Can they work with NVIDIA on future designs?
We can hypothesize that these NVIDIA custom silicon customers would be able to leverage NVIDIA's in-house supercomputers on AWS to accelerate and optimize their design work. That would be a nice additional income stream and an incredible differentiator. If so, that may be why NVIDIA hosts its latest "in-house" supercomputer, Project Ceiba, in AWS data centers, where secure cloud infrastructure is already in place; NVIDIA could offer chip design optimization services on Ceiba.
While this speculation may be a stretch, such a move would show that Nvidia sees the writing on the wall and is ready to change the industry again.
Although this speculation is a bit bold, it is inevitable that all technology is commoditized over time, especially previous generations of silicon. When NVIDIA was trying to acquire Arm, I thought the deal would let NVIDIA monetize, through licensing agreements, products it didn't want to produce itself.
It looks like that's exactly what NVIDIA is doing right now.
In response to Sam Altman: $7 trillion could buy them all.
One recent hot topic in the AI chip industry is, of course, the rumor that OpenAI CEO Sam Altman plans to raise $7 trillion to upend AI chips and chip manufacturing.
First of all, we must say that this is a lot of money.
Second, it certainly won't be easy. Setting aside the overall difficulty of advanced chip manufacturing, after years of development only TSMC, Samsung, and Intel remain in the leading-edge chip manufacturing market.
What's more, an advanced fab costs upward of $10 billion (a drop in the bucket compared to $7 trillion, perhaps). Industry executives also say there is uncertainty about finding engineers to staff a large number of new factories, procuring the machines to fill them, and winning enough orders to justify those plants.
Even the massive construction of new chip factories won't necessarily solve Altman's near-term problem — the shortage of AI chips needed to produce systems like OpenAI's ChatGPT. The biggest bottleneck in Nvidia's AI chip production is packaging, which is the manufacturing step after the circuit is imprinted onto the silicon wafer.
Sam Altman has also complained about the cost of Nvidia's chips, but Raymond James analyst Srini Pajuri said adding chip factories might not solve that problem.
"In order to reduce the cost of AI chips, we need more competition with Nvidia," he said.
Nvidia CEO Jensen Huang is skeptical of this crazy plan.
At the World Governments Summit in Dubai, the CEO of the world's most valuable chipmaker said that advances in computing technology will make developing artificial intelligence significantly cheaper.
"You can't assume you'll just buy more computers. You also have to assume that computers are going to get faster, so the total amount you need won't be as much," Huang said. "If you just assume that computers aren't going to get faster, you might come to the conclusion that we need 14 planets, three galaxies, and four suns to fuel all of this. But computer architecture is still advancing."
He said he believes the chip industry will reduce the cost of AI because its components are being manufactured "faster and faster".
Thus, Huang's view suggests that better, more cost-effective chips would make Altman's ambitious investment plan unnecessary.
When asked whether the next era of AI will be built on GPUs (Nvidia holds about 80% of the GPU market) or on some new technology, Huang noted that many other major tech companies are indeed developing proprietary chips that could serve as GPU replacements. Microsoft is developing Maia, a custom silicon chip built specifically for training large language models; Google is developing tensor processing units (TPUs) designed to accelerate machine learning workloads; and Meta is reportedly working on its own in-house chips.
One way Nvidia's approach to AI differs from its potential competitors', Huang said, is that its GPUs can be used by "anyone, on any platform," part of his ambition to "democratize AI." Nvidia, he claimed, is present "in every cloud and data center, all the way down to autonomous systems and self-driving cars."
As new ways to build AI systems are invented, Huang said, Nvidia will be able to adapt flexibly. "All of these architectures can be created on top of NVIDIA's flexible architecture, and because they're almost everywhere," he said, "any researcher can access NVIDIA's GPUs and invent the next generation."
When asked how many graphics processing units (GPUs) he could buy for $7 trillion, Huang replied with a smile: "Obviously, all the GPUs."
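For a rough sense of the scale behind Huang's quip: at the $40,000-per-H100 price mentioned earlier, simple division shows how many such GPUs $7 trillion could nominally buy. This is a back-of-the-envelope sketch that ignores supply limits, volume discounts, and everything else about a real purchase.

```python
# Back-of-the-envelope: how many $40,000 GPUs does $7 trillion buy?
budget = 7_000_000_000_000  # Altman's rumored $7 trillion
price_per_gpu = 40_000      # upper-end H100 price cited above

gpus = budget // price_per_gpu
print(f"{gpus:,}")          # 175,000,000
```

On those assumptions, $7 trillion corresponds to 175 million H100-class GPUs, orders of magnitude more than anyone's current installed base.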
This article is compiled from "Semiconductor Industry Observation", Zhitong Finance Editor: Zhang Jinliang.