The writer and thinker Spencer Johnson once said, "The only constant is change itself." Society is indeed changing at a rapid pace. Driven by emerging technologies such as AI, 5G, and cloud computing, the digital world has become more complex and harder to predict.
For enterprises facing this increasingly complex technology evolution, it is imperative to identify the right direction of technology development and find the best path to accelerate digital transformation. In a recent interview, John Roese, Global Chief Technology Officer of Dell Technologies, shared his outlook on technology trends for 2024.
Generative AI moves from theory to practice
Today, with the wide adoption of generative AI, a new wave of technology represented by artificial intelligence is leading society into the fourth industrial revolution. As a technology that simulates human intelligence, AI can make independent decisions and act through learning, reasoning, and self-correction, and it already plays an important role in healthcare, finance, transportation, education, and other fields.
If this year many companies focused their research and development on the technology itself, then next year generative AI is expected to move from theory to practice. "This year, although a series of companies, including Google and OpenAI, launched generative AI technology, not many customers actually put it into use. However, some leading companies are already thinking about how to use generative AI to create value, and in 2024 we will see a real shift. As the focus moves from broad experimentation to top-down strategic priorities, transformative generative AI projects are poised to emerge," John Roese said.
However, moving from theory to practice is itself difficult. Many enterprises have tuned and trained foundation models to obtain what they call their own large models, only to find that when these models are applied in the production scenarios of vertical industries, they do not actually become productive. To change this, the following problems must be solved.
First, define the inference infrastructure. Unlike training, which requires large clusters and an accelerated computing architecture, inference is the process of putting training results to use, and its infrastructure depends mainly on how many customers use the model or its inference capabilities. If an enterprise has a very mature model but only a small number of customers, one server may be enough; if it has a very simple model but a large number of customers, the number of servers required grows accordingly. Enterprises need to be clear about how to deploy the inference architecture (a rough sizing sketch follows after this list).
Second, determine where the inference infrastructure will be deployed. Training infrastructure typically sits in the data center, but the inference side is much closer to data and users: it may be placed at the edge, near a call center, or next to end users. Enterprises need to be clear about where inference is deployed in order to get the most value from generative AI.
Third, secure the inference infrastructure. Unlike the training side, inference infrastructure leaves the data center and reaches the edge, the factory floor, manufacturing centers, and transportation networks. Enterprises need to ensure that the AI models they create remain secure when put into production.
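To make the sizing point in the first item concrete, here is a minimal back-of-the-envelope sketch. All numbers, names, and the capacity formula are illustrative assumptions rather than figures from the interview; it simply estimates how many inference servers a workload needs from expected request volume and per-server throughput.

```python
import math

def estimate_inference_servers(requests_per_second: float,
                               tokens_per_request: int,
                               server_tokens_per_second: float,
                               headroom: float = 0.7) -> int:
    """Rough estimate of inference servers needed.

    requests_per_second      -- expected peak query rate from customers
    tokens_per_request       -- average tokens generated per response
    server_tokens_per_second -- sustained throughput of one server
    headroom                 -- fraction of capacity to actually use (slack for spikes)
    """
    required_throughput = requests_per_second * tokens_per_request
    usable_per_server = server_tokens_per_second * headroom
    return max(1, math.ceil(required_throughput / usable_per_server))

# A mature model with few customers may fit on one server...
print(estimate_inference_servers(requests_per_second=2, tokens_per_request=300,
                                 server_tokens_per_second=2500))   # -> 1
# ...while even a simple model with many customers needs a fleet.
print(estimate_inference_servers(requests_per_second=400, tokens_per_request=300,
                                 server_tokens_per_second=2500))   # -> 69
```

The point of the sketch is only that inference capacity scales with customer demand rather than with model sophistication, which is why deployment planning has to start from usage, not from the model itself.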
"When generative AI was just getting started, almost every enterprise, organization, and functional department was thinking about how to use it to change the way production and business were done. But in actual implementation, while an enterprise may have hundreds or thousands of potential use cases, resources are limited. CIOs and leaders have to prioritize among the available generative AI use cases, weigh the operational costs, and decide whether to move forward with a given project," John Roese added.
Solidify your Zero Trust framework
In a world of unpredictable and complex cyberattacks, navigating the security landscape is like finding a path through a labyrinth of crisscrossing channels. Zero Trust simplifies that journey.
Zero trust, a concept that has been around for a long time, aims to automate an organization's security architecture and orchestrate a response as soon as systems are attacked. It deserves attention because most of today's network security architecture is reactive: a new countermeasure appears only after a new type of attack does. Under a zero trust framework, every person, device, and application must be verified, which better protects the critical data of the enterprise.
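As a minimal illustration of the "verify every person, device, and application" principle, the following sketch shows a default-deny check. The policy fields and checks are hypothetical assumptions for illustration, not any vendor's API: a request is granted only when the user identity, device posture, and application are all explicitly verified.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    device_id: str
    app_id: str
    mfa_passed: bool         # identity verified with multi-factor authentication
    device_compliant: bool   # e.g. disk encrypted, patched, attested

# Hypothetical allow-list: which verified applications each user may reach.
POLICY = {
    ("alice", "payroll-app"),
    ("alice", "crm-app"),
}

def authorize(req: AccessRequest) -> bool:
    """Default-deny: allow access only when person, device, and app are all verified."""
    if not req.mfa_passed:          # verify the person
        return False
    if not req.device_compliant:    # verify the device
        return False
    if (req.user_id, req.app_id) not in POLICY:  # verify the application is permitted
        return False
    return True

print(authorize(AccessRequest("alice", "laptop-42", "payroll-app", True, True)))   # True
print(authorize(AccessRequest("alice", "laptop-42", "payroll-app", True, False)))  # False: non-compliant device
```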
At the operational level, enterprises should also be aware of the following issues when promoting a zero trust framework:
First, for an enterprise, zero trust is not a quick goal. Enterprises need to be clear that a zero trust framework cannot be built easily through a single technology or solution.
Second, full zero trust means supporting all of the principles it covers. If that is not easy to apply to an existing architecture, organizations should consider putting the riskiest applications into a zero trust framework while the rest remain on existing systems.
Third, no one yet knows exactly what end-to-end zero trust looks like, so enterprises need to take evolutionary steps toward it. If zero trust cannot be fully implemented on day one, they should develop a strategy that guides every security decision along the way.
Finally, businesses need to pick high-risk areas for their zero trust implementation. They should start with the weakest link and gradually add to their security barriers; every small step toward zero trust contributes to ultimate security, and enterprises need to make prudent decisions about how to implement and apply these principles.
Edge scales, multi-cloud prevails
As a transformative technology in the Internet of Things, the edge platform undoubtedly has broad room for development. In real life, much of the data people generate lives in edge nodes close to them, such as factories and hospitals, rather than in data centers. This has led the large cloud service providers to build their own edge offerings for each architecture, such as Google's Anthos and Amazon's Outposts.
To build a more modern edge platform, enterprises can take two paths: expand isolated edge islands, or build a multi-cloud edge platform. The future direction is clearly the latter, with both software and hardware orchestration on top of it. Cloud services then no longer need a separate platform for each workload; a common edge platform makes the modern edge an extension of the multi-cloud infrastructure.
In addition, the edge is expected to integrate more deeply with AI next year, because much of AI will be applied not in the data center but in the production and operation activities of enterprises, and running it efficiently and securely means running it at the edge. "Machine-to-machine communication tends to need lower latency; when data comes in from sensors or real systems, reaction time must be faster. Take a train company in the United States as an example: it has deployed a large number of cameras across the country to watch the status of every switch, something manpower alone cannot cover. Using cameras in AI scenarios requires the enterprise to know in real time that a safety problem may be occurring, and this kind of real-time monitoring of safety systems is exactly such an AI application scenario," John Roese said.
In addition to these three technology outlooks, John Roese also discussed the possibility of combining quantum computing with generative AI. Unlike conventional computing, quantum computing can explore an enormous space of possibilities and select the most likely outcome from a nearly unlimited set of candidate answers. Generative AI is likewise a form of probabilistic computing, so quantum computing, applied to optimize it, could exponentially increase its efficiency and allow generative AI to do far more.
From IT security to quantum technology, from artificial intelligence to edge computing and cloud computing, our digital world is evolving and expanding at an unprecedented rate. Enterprises still face many challenges, but by understanding future technology trends they can better create value from their data.