Suppose you want to select a paragraph, extract its text, and then use a translation service to convert the words from English to Chinese. What would you need to do?
With the help of Amazon Q, the new Amazon Web Services generative AI assistant, users only need to give a simple natural-language instruction, and Amazon Q can generate what is needed to carry out the whole series of operations. "This is just one of Amazon Q's use cases," said Chen Xiaojian, general manager of the product department at Amazon Web Services Greater China.
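The article does not show what Amazon Q actually produces for this scenario, so the following is only a rough sketch of the kind of helper such an assistant might generate: a small Python function that sends an extracted English paragraph to Amazon Translate and returns the Chinese result. The function name, region, and sample text are illustrative assumptions, not details from the article.

```python
# Hypothetical sketch: translate an extracted English paragraph to Chinese
# using Amazon Translate via boto3. Region, function name, and sample text
# are assumptions for illustration only.
import boto3

translate = boto3.client("translate", region_name="us-east-1")

def translate_paragraph(text: str) -> str:
    """Translate an extracted English paragraph into Chinese."""
    response = translate.translate_text(
        Text=text,
        SourceLanguageCode="en",
        TargetLanguageCode="zh",
    )
    return response["TranslatedText"]

if __name__ == "__main__":
    paragraph = "Amazon Q is a new generative AI-powered assistant."
    print(translate_paragraph(paragraph))
```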
Recently, at the Amazon Web Services re:Invent 2023 global conference, Amazon Web Services announced a full-scale push into generative AI, launching a series of new services and features for enterprise-grade generative AI: Amazon Q, a new generative AI assistant intended to reshape how people work; more model choices and powerful new capabilities for Amazon Bedrock; and five new Amazon SageMaker capabilities that make it easier and more secure to build and apply generative AI models at scale.
According to the company, Amazon Q is an Amazon Web Services expert, trained on the knowledge and experience Amazon Web Services has accumulated over 17 years, and it can answer customers' Amazon Web Services-related questions through a variety of interfaces. As a new generative AI-powered assistant, Amazon Q can also be tailored to workplace scenarios: drawing on insights from an organization's own information repositories and enterprise systems, it can quickly provide relevant answers, generate content, and act on complex questions.
Amazon Q is currently available in preview, Amazon Q in Amazon Connect is generally available, and Amazon Q in Amazon Supply Chain is coming soon.
In addition, Amazon Bedrock has added more model choices and new capabilities for securely building and scaling generative AI applications. These updates further lower the barrier to entry for generative AI: customers get a broader selection of industry-leading models, new capabilities for evaluating models, and simpler ways to customize models with relevant proprietary data.
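For readers unfamiliar with how an application reaches these models, here is a minimal sketch of calling a foundation model through the Amazon Bedrock runtime API with boto3. The model ID and the Claude-style prompt format are assumptions for illustration; the models actually available depend on what is enabled in your account.

```python
# Minimal sketch: invoke a foundation model through Amazon Bedrock's runtime API.
# The model ID and request body follow the Anthropic Claude v2 text-completion
# convention and are assumptions; adjust for the model enabled in your account.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarize the benefits of serverless databases.\n\nAssistant:",
    "max_tokens_to_sample": 300,
    "temperature": 0.5,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",   # assumed model ID
    contentType="application/json",
    accept="application/json",
    body=body,
)

result = json.loads(response["body"].read())
print(result["completion"])
```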
Amazon SageMaker's five new capabilities make it easier and faster for organizations to build, train, and deploy machine learning models that support a variety of generative AI use cases. Amazon SageMaker HyperPod accelerates foundation model training at scale, reducing training time by up to 40% and keeping training runs uninterrupted for weeks or months. Amazon SageMaker Inference reduces deployment costs by an average of 50% and inference latency by 20%. Amazon SageMaker Clarify helps organizations evaluate, compare, and select the best models. Two enhancements to Amazon SageMaker Canvas, preparing data with natural-language instructions and using models for large-scale business analysis, make it easier and faster for businesses to integrate generative AI into their workflows.
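The inference-cost and latency figures above apply at the endpoint-deployment layer. As a hedged sketch of that layer, the snippet below deploys a model to a SageMaker real-time endpoint with the SageMaker Python SDK; the Hugging Face model ID, instance type, and framework versions are illustrative assumptions, not details from the announcement.

```python
# Hedged sketch: deploy a model to a SageMaker real-time endpoint using the
# SageMaker Python SDK. Model ID, instance type, and framework versions are
# assumptions for illustration.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

model = HuggingFaceModel(
    env={"HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english"},
    role=role,
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)

# Deploy to a managed real-time endpoint (instance type is an assumption).
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)

print(predictor.predict({"inputs": "Generative AI makes building applications easier."}))

# Delete the endpoint when finished to avoid ongoing charges.
predictor.delete_endpoint()
```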
Against the backdrop of the large-model boom, using generative AI to drive business innovation and quickly gain a competitive advantage has become a priority for enterprises. Chen Xiaojian believes that many industries have broad prospects for adopting generative AI: "In terms of the macro value it creates, generative AI will deliver new customer experiences, raise employee productivity, help enterprises run their operations more efficiently, and make content creation more efficient. So for any enterprise, choosing a suitable scenario and a suitable model is the first step in generative AI innovation."
At the same time, the quality of the data directly affects the quality of the model, and it is also the key to the performance of generative AI. Building a strong data "foundation" for generative AI requires high-quality, diverse, and reliable datasets from multiple sources.
To further enrich the choice of vector databases and keep business data and vector data synchronized in support of generative AI, Amazon Web Services has launched the Amazon OpenSearch Serverless vector engine, new vector search capabilities for Amazon DocumentDB and Amazon DynamoDB, and a preview of vector search for Amazon MemoryDB for Redis, improving the responsiveness and latency of generative AI applications. Amazon Web Services also officially launched Amazon Neptune Analytics, a graph database analytics engine that helps applications such as Snapchat analyze billions of graph connections in seconds.
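To make the vector-search idea concrete, here is a minimal sketch of a k-NN query against an OpenSearch index using the opensearch-py client. The endpoint, index name, field names, and the tiny 4-dimensional toy embeddings are assumptions; in practice the embeddings would come from a model (for example via Amazon Bedrock) and have hundreds of dimensions.

```python
# Minimal sketch: index a document with a vector embedding and run a k-NN
# similarity query with OpenSearch. Endpoint, index, fields, and embeddings
# are assumptions for illustration.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])  # assumed endpoint

index_name = "product-docs"

# Create an index with a k-NN vector field (dimension must match the embeddings).
client.indices.create(
    index=index_name,
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                "embedding": {"type": "knn_vector", "dimension": 4},
                "text": {"type": "text"},
            }
        },
    },
)

# Index a document together with its (toy) embedding.
client.index(
    index=index_name,
    body={"embedding": [0.1, 0.2, 0.3, 0.4], "text": "How to configure vector search"},
    refresh=True,
)

# Query: return the k nearest documents to a query embedding.
results = client.search(
    index=index_name,
    body={
        "size": 3,
        "query": {"knn": {"embedding": {"vector": [0.1, 0.2, 0.3, 0.4], "k": 3}}},
    },
)

for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["text"])
```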
In terms of data governance, Amazon Web Services launched a preview of the AI description suggestions feature for Amazon DataZone, which automatically generates more understandable business descriptions for enterprise datasets and offers recommendations on how to use them. Amazon Web Services also launched a preview of Amazon Clean Rooms ML, which lets enterprises and their partners apply machine learning models to aggregated data without copying or sharing the underlying raw data with each other; its first model is designed to help companies create lookalike segments for marketing use cases.
"A generative AI application is like an iceberg floating in the sea. The foundation model is only the tip visible above the waterline; beneath it, a large number of services beyond the foundation model are needed for support, such as accelerator chips, databases, data analytics, and data security services," Chen Xiaojian said.
Chengfeng Wang, research director at iResearch, said: "At this Amazon Web Services re:Invent conference, the company continued to innovate in generative AI while upgrading and iterating on its foundational products. The most impressive moves include comprehensively deepening serverless, pushing products such as databases, data analytics, and AI toward serverless operation, and maintaining its lead in cloud native; and continuing to provide strong capability support for generative AI, including not only more advanced computing resources through cooperation with chip vendors such as NVIDIA, but also a platform for selecting and accessing multiple models through Amazon Bedrock, with a greater emphasis on building out the generative AI ecosystem."