Focusing on the Amazon Web Services re:Invent re:Cap session, reimagining infinite possibilities

Mondo Technology Updated on 2024-01-31

Abstract: From December 14th to 17th, the 12th Global Software Case Study Summit (TOP100SUMMIT) was held at the Beijing International Convention Center. Wei Xing, a senior technical lecturer at Amazon Web Services, hosted the "Amazon Web Services re:Invent re:Cap" session, which shared inspiration and hands-on practice in generative AI.

First, Zheng Yubin delivered a practical talk titled "Generative AI-Driven, Developer-Centric AIOps Optimization". She walked through the evolution and practice of DevOps, the opportunities and challenges of AIOps, how the new AI technologies launched at re:Invent make AIOps fully achievable, and the outlook for AIOps.

Zheng Yubin, Senior Developer Evangelist at Amazon Web Services.

Today, enterprises have a wide range of model options to power their generative AI applications. To strike the right balance between accuracy and performance in specific use cases, organizations must effectively compare models and find the best option based on their preferred metrics. For each new scenario or model, repeating these subjective comparisons demands time, expertise, and resources, which limits enterprises' use of generative AI.

The generative AI technology stack consists of a three-layer architecture: from the bottom up, an infrastructure layer, a foundation model service layer, and an AI application layer. The infrastructure layer includes the familiar GPUs as well as Amazon's own chips developed specifically for training and inference, among other infrastructure.

With Amazon Bedrock, Amazon's managed foundation model service, customers can use automatic or human evaluations to compare industry-leading foundation models from providers such as AI21 Labs, Amazon, Anthropic, Cohere, Meta, and Stability AI, and select the best fit for their specific use case. With Agents for Amazon Bedrock, customers can improve accuracy and accelerate the development of generative AI applications.
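To make the comparison workflow concrete, here is a minimal sketch (not from the talk) of sending the same prompt to two candidate models on Amazon Bedrock via boto3, so the answers can be scored side by side. The model IDs, Region, and the Anthropic request schema are illustrative assumptions to verify against the Bedrock documentation for your account.

```python
import json
import boto3

# A minimal sketch, assuming the Anthropic messages schema on Bedrock;
# model IDs and Region are illustrative, not from the talk.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, prompt: str) -> str:
    """Send one prompt to one foundation model and return its text reply."""
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    })
    resp = bedrock.invoke_model(modelId=model_id, body=body)
    payload = json.loads(resp["body"].read())
    return payload["content"][0]["text"]

# Run the same prompt through two candidates, then score the replies
# offline against your preferred accuracy and performance metrics.
prompt = "Summarize the incident-response runbook in three bullet points."
for model_id in ("anthropic.claude-v2:1", "anthropic.claude-instant-v1"):
    print(model_id, "->", ask(model_id, prompt))
```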

Turning to this year's re:Invent tool updates, Zheng Yubin introduced Amazon Q, a generative AI work assistant tailored to the business. Integrated with Amazon CodeWhisperer, it can help with auditing and observability, optimize code and improve efficiency, explain program logic, and help developers complete everyday software development tasks. With just a few clicks and natural-language input, an idea posed as a question can become fully tested, merged, running code.

At the end of her talk, Zheng Yubin said that AIOps will change the way developers work through proactive maintenance, data-driven development, and decision support for operations.

Xiao Yu, an application scientist at the Amazon Web Services Solution R&D Center, presented "Building a Cloud Engine for Diffusion-Model AI Image Generation, So Teams Can Focus on Business Innovation".

Xiao Yu, Application Scientist at the Amazon Web Services Solution R&D Center.

Xiao Yu first shared a case study: a customer wanted to quickly extend an existing app with generative AI technology to drive more business innovation, so they experimented with a Stable Diffusion-based feature for generating stylized images to make the app more attractive and interactive. However, they ran into challenges: limited local computing resources; high self-hosting costs (hardware plus maintenance); insufficient stability when building directly on open source projects, which could not guarantee normal business operation; no elastic back-end API resources, making a 2C business hard to develop; and difficulty keeping up with the pace of feature iteration in open source communities.

To help more enterprises bring AI image generation businesses to production, Amazon developed a cloud-based image generation solution: it migrates local workloads such as model fine-tuning and inference to the cloud at low migration cost; deploys with one click from CloudFormation templates; supports both UI and API call modes to serve both kinds of users; provides elastic cloud resources; and offers multi-user permission management and resource isolation. By extending to the cloud through an extension plus a middleware layer, with SageMaker as the computing platform, the approach can save 80% of deployment time and improve work efficiency by 10x.
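As a hedged illustration of the API call mode (the real middleware defines its own interface), the sketch below sends a text-to-image request to a Stable Diffusion model hosted on a SageMaker endpoint. The endpoint name and the payload and response fields are assumptions for illustration only.

```python
import base64
import json
import boto3

# A hedged sketch of the "API call mode": the endpoint name and the
# request/response schema are assumptions, not the solution's actual API.
runtime = boto3.client("sagemaker-runtime", region_name="us-west-2")

payload = {
    "prompt": "a watercolor fox, stylized, app-sticker framing",
    "steps": 30,
    "width": 512,
    "height": 512,
}

resp = runtime.invoke_endpoint(
    EndpointName="sd-image-generation",   # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
result = json.loads(resp["Body"].read())

# Assume the endpoint returns base64-encoded images.
with open("stylized.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
```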

Xiao Yu then walked through the solution's core functions in detail: multi-user permission configuration, elastic configuration of computing resources, data asset management, and the underlying AI image generation framework. The solution supports direct invocation through both UI and API, making it flexible and versatile; its multi-user capability helps customers build an internal media asset system more conveniently; and it suits multiple 2B and 2C scenarios, accelerating GenAI innovation by staying focused on business needs. The hope is that these implementation practices help enterprises speed up preliminary research and model validation for their AI image generation businesses, improve the efficiency of building their own AI image platforms and tools, lower the hardware requirements for running an AI image business, and manage generative AI assets in a standardized way.

Next, Cao Linjie, head of product at KEYi Tech, and Mo Ziyuan, a solution architect at Amazon Web Services, jointly presented "The AI Technology Behind the Popular Cloud-Native Child Companion Robot, Revealed". The talk covered a first look at the popular cloud-native child companion robot, the challenges KEYi Tech encountered in turning an intelligent cute pet into a product, and how, with Amazon Web Services products, the robot can freely express emotions through AI, with technology empowerment, optimization, and iteration continuously improving the product experience.

Cao Linjie, Head of Product at KEYi Tech.

Mo Ziyuan, Solution Architect of Amazon Web Services.

Loona is a new category of five-in-one robot: multi-dimensional, deep human-machine interaction; smooth and agile movement; a rich "sense of life" design; all-round intelligent perception; and expandable, diversified functions.

The Loona project ran into several key challenges: first, the engineering design and development of the robot itself; second, the design and development of the emotional interaction model; and third, the layout of the global market (full security and compliance, convenient development and operations, guaranteed global coverage, product intelligence, and partner support). Combining the conversation, interaction, and generation capabilities of large models, how can Loona achieve stylized dialogue, emotional understanding, and rich functionality? Mo Ziyuan, solution architect at Amazon Web Services, presented a solution, while noting the many technical challenges encountered along the way.

Addressing these problems, Mo Ziyuan explained that to let Loona hear and understand, the team used Amazon Lex, built on the Alexa technology stack, to provide ASR speech-to-text and NLU semantic understanding in a cloud-native way; used a Bedrock generative AI large language model to accelerate Lex development; and natively integrated the serverless Lambda service. This greatly improved developer efficiency, completing the R&D, deployment, and global delivery of Loona's conversational interaction features in just one month. Meanwhile, Amazon Kinesis Video Streams supports Loona's two-way voice and video real-time calls, so Loona not only offers emotional companionship but also provides remote monitoring that meets security and privacy standards. In addition, Amazon Polly handles TTS text-to-speech, so the robot can not only "understand" but also respond to the user by voice in natural language. Going forward, the team will explore more smart pet scenarios built on Amazon Web Services' large language model capabilities.
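As a minimal sketch (not KEYi Tech's production code) of how the Lex and Polly pieces fit together, the snippet below sends a user utterance to a Lex V2 bot for NLU and turns the bot's reply into speech with Polly. The bot identifiers, session ID, and voice are placeholders.

```python
import boto3

# A minimal sketch, not production robot code: bot identifiers,
# session ID, and voice are illustrative placeholders.
lex = boto3.client("lexv2-runtime", region_name="us-east-1")
polly = boto3.client("polly", region_name="us-east-1")

# 1) NLU: hand the transcribed utterance to a Lex V2 bot.
lex_resp = lex.recognize_text(
    botId="LOONABOT123",        # placeholder bot ID
    botAliasId="TSTALIASID",    # placeholder alias ID
    localeId="en_US",
    sessionId="robot-session-42",
    text="Loona, come here!",
)
reply = lex_resp["messages"][0]["content"]

# 2) TTS: synthesize the reply with Polly so the robot answers aloud.
audio = polly.synthesize_speech(Text=reply, OutputFormat="mp3", VoiceId="Ivy")
with open("reply.mp3", "wb") as f:
    f.write(audio["AudioStream"].read())
```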

Sun Xiaoguang, head of TiDB Serverless R&D at PingCAP, presented "The Evolution of TiDB Serverless's Cloud-Native Architecture: From 0 to 20,000+ Clusters", a talk that resonated throughout the audience.

Sun Xiaoguang, Head of TiDB Serverless R&D at PingCAP.

With the growing popularity of cloud-native development models, serverless services are becoming the first choice for more and more developers. With serverless services, you are not bound to a specific technical architecture: users no longer need to care about infrastructure, and the diverse workload requirements of various scenarios are served at a high input-output ratio.

TiDB Serverless is a fully managed database product on TiDB Cloud. In just 400 days after becoming generally available, it gained a large number of users and currently provides high-quality service to more than 20,000 user clusters.

One reason behind TiDB Serverless's rapid user growth is that it has always followed the core requirements of its target customers, staying close to this well-defined group's demands in both product capabilities and design philosophy, building on the solid foundation TiDB laid in the past. TiDB Serverless is fully compatible with MySQL, letting users keep their familiar technology stack and tools; it scales elastically and seamlessly to handle business growth and traffic surges; it ensures business continuity with built-in high availability and zero-downtime service; and its built-in HTAP capabilities give enterprises real-time insight into their business. With all these advantages, it still greatly reduces user cost through pay-as-you-go pricing. In addition, TiDB Serverless offers a free tier within usage limits, providing a zero-cost database service for a large number of early-stage innovative products and enabling more innovations to be born.
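The MySQL compatibility claim is easy to demonstrate: a stock MySQL driver connects to a TiDB Serverless cluster unchanged. Below is a minimal sketch using pymysql; the gateway host, user, and database are placeholders in the shape shown by the TiDB Cloud console, and TLS is assumed to be required.

```python
import pymysql  # an ordinary MySQL driver; no TiDB-specific client needed

# A minimal sketch, assuming placeholder connection details copied
# from the TiDB Cloud console; TiDB Serverless connections use TLS.
conn = pymysql.connect(
    host="gateway01.us-east-1.prod.aws.tidbcloud.com",  # placeholder gateway
    port=4000,
    user="xxxxxxxx.root",       # placeholder cluster user
    password="********",
    database="test",
    ssl={"ca": "/etc/ssl/certs/ca-certificates.crt"},
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")   # reports a MySQL-compatible version
    print(cur.fetchone())
conn.close()
```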

From a customer's perspective, serverless databases bring all sorts of benefits. For service providers, however, the road to a serverless transactional database is not smooth. Sun Xiaoguang noted that serverless databases face challenges such as automatic elastic scaling and cold starts. In addressing them, he emphasized that Amazon Web Services' innovation leadership, including its advanced elastic resource services and serverless products, provides a solid foundation for TiDB Serverless. Leveraging products and services such as Amazon EKS and Amazon S3, PingCAP successfully refactored the TiDB architecture on the cloud into a true serverless database service. In addition, the channel advantage of Amazon Web Services Marketplace contributed significantly to TiDB Serverless's rapid customer growth.

Finally, Sun Xiaoguang argued that as cloud-native technology enters its next stage of development, serverless databases are a key component of the cloud product matrix. As the product continues to iterate and mature, serverless databases will find an ever wider range of application scenarios.

Wei Xing, senior technical lecturer at Amazon Web Services, shared "Introduction to Generative AI for Business Technical Decision Makers". The talk focused on an introduction to generative artificial intelligence (AI) and the art of the possible, planning generative AI projects, and building a generative AI-ready enterprise.

Wei Xing, senior technical lecturer at Amazon Web Services.

Explaining the difference between generative AI and machine learning, Wei Xing said that generative AI is a subset of deep learning, because it can adapt models built with deep learning without retraining or fine-tuning, whereas deep learning uses the concept of neurons and synapses, similar to how our brains connect. Generative AI is a type of AI that can create new content, including conversations, stories, images, and videos.

Amazon Web Services' implementation of generative AI is organized in layers. The lowest is the chip layer: Trainium accelerators for training and Inferentia chips for inference, rentable on demand. Above that sits SageMaker, a platform-level service that covers almost every machine learning scenario, and on top of it, Bedrock.

Today, commercial use cases for large language models span healthcare, life sciences, financial services, manufacturing, retail, media, and entertainment. But they also carry risks, including legal, social, and privacy issues. He then introduced the technical fundamentals and terminology of generative AI and how to plan a generative AI project: define the scope, select the model, adapt the model, and use the model.
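For the "select the model" step, a hedged starting point is simply to enumerate what is available: the sketch below lists text-output foundation models in a Region through the Bedrock control-plane API before any deeper comparison work begins.

```python
import boto3

# A small sketch of the "select the model" step: enumerate candidate
# foundation models before comparing them on your own metrics.
bedrock = boto3.client("bedrock", region_name="us-east-1")

models = bedrock.list_foundation_models(byOutputModality="TEXT")
for summary in models["modelSummaries"]:
    print(summary["modelId"], "-", summary.get("providerName", "unknown"))
```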

Finally, he noted that building a large language model project inside an enterprise starts with culture: ensure team members understand generative AI and address their concerns about jobs. Teams must also be positioned for generative AI success, and a governance model for generative AI must be established.

This special forum drew to a close amid heated discussion and ideas still flowing. Going forward, Amazon Web Services will continue to focus on customer needs, deepen its artificial intelligence and machine learning technologies, and keep innovating and reinventing, committed to providing enterprises with responsible AI applications that help them meet challenges, reshape their businesses, and accelerate their generative AI journey.
