Boom! The era of "programming with your mouth" has arrived, and Comate has swept more than 8,000 companies

Updated on 2024-01-31

Imagine being asked to program today with the "punched paper tape" shown below. Would you be able to cope?

The earliest form of computer programming used machine language, that is, binary code that directly controlled every operation of the computer hardware.

Assembly language uses mnemonics instead of binary instructions, which makes programming easier, but it still requires an understanding of the underlying hardware.

High-level languages, on the other hand, provide a further level of abstraction, allowing humans to write programs in syntax that approximates natural language, relying less on knowledge of the underlying hardware while handling more complex programming tasks. Even among high-level languages there is an informal pecking order, hence the running joke that "PHP is the best programming language in the world" (just kidding).

However, no matter how "advanced" a programming language is, it is ultimately a bridge between humans and machines. In the human world, when two people communicate, what each hopes for most is that the other speaks the same language: communication costs are lowest, and misunderstandings and information loss are least likely.

Therefore, in Baidu's view, as AI develops, the best form of communication between humans and machines is destined to be native natural language. In the future, high-level programming languages may be relegated to history museums, just like punched tape.

The WAVE SUMMIT+ 2023 Deep Learning Developer Conference provided strong evidence for this view.

Comate AutoWork is unveiled

The era of "programming with your mouth" has really come

Comate AutoWork was officially launched at the conference.

You write the requirements document; Comate handles the rest, thinking through the problem, breaking down the requirements, and executing the tasks to complete code generation.

I believe that in the near future even the code-generation step itself will no longer need to be visible to programmers; Comate will deliver the finished product directly from the requirements, the ultimate form of "programming with your mouth".

At present, AI is not yet developed enough for natural language to replace high-level programming languages, but natural language has clearly begun to penetrate scenarios that used to be implemented entirely in programming languages.

Seeing this, you may be thinking: couldn't I do the same thing with GPT-4?

Leaving aside the "GPT-4 is getting lazy and dumber" reports from a while back, once you actually get hands-on with Comate you will find that an AI-agent-driven IDE purpose-built for programming scenarios offers far more in experience and productivity than a bare large model.

In addition to the most disruptive capability shown in the demo above, "programming with your mouth" that turns a requirements document directly into delivered software, here are a few simple examples of features that are especially commonly used in Comate:

Code Explanation:

Smart Completion:

This is not simple syntax completion, but intelligent completion that understands the business logic:

Code Optimization:

Comate can optimize code, avoiding potential vulnerabilities that are hard to spot and making the code more robust and stable.

In summary, Comate AutoWork can be described with a single diagram:

Comate AutoWork penetrates the entire R&D pipeline. Developers only need to state their goals and requirements; the subsequent steps of requirements breakdown, planning, code generation, debugging, and running can be executed in sequence, and any step in the middle can also be taken out on its own and seamlessly integrated into a developer's existing code-repository workflow.

You may wonder how much of an efficiency gain this new software-development paradigm actually brings.

Consider a very common software requirement: a lottery turntable (prize wheel) for experience coupons.

In the past, developing such a requirement from scratch would have taken a senior programmer at least a day, or even several days. With Comate AutoWork, the answer is: 2 minutes.

A development-efficiency gain of hundreds or even thousands of times is really not a gimmick; it is something actually happening in the era of large models we live in. Embrace this era, and you can be among the first developers to enjoy its dividends.

In addition, it was officially announced at WAVE SUMMIT+ that Comate AutoWork is open for testing. Developers can apply for a trial directly on the Comate official website.

Comate AutoWork portal:

The technical principles behind Comate AutoWork

Comate AutoWork is this powerful; how is the technology behind it implemented?

We all know that for today's large models, completing or generating individual pieces of code is not difficult, but that alone is far from enough to support AutoWork's impressive performance across the entire R&D pipeline.

Behind the impressive Comate AutoWork are the large model's chain-of-thought capability plus RAG-based intelligent code retrieval.

Based on the large model's strong chain-of-thought capability, AutoWork can think and execute tasks like an agent: it understands the user's requirements and then carries out requirements breakdown, planning, code generation, debugging, and running in sequence. It is therefore no surprise that AutoWork can produce a coupon-turntable program in a few minutes.
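To make this pipeline concrete, here is a minimal sketch in Python of what an agent-style loop over these steps could look like. It is purely illustrative: the `call_llm` stub, the prompts, and the function names are assumptions made for the sketch, not Comate AutoWork's actual implementation.

```python
# Minimal conceptual sketch of a chain-of-thought style coding agent.
# NOTE: call_llm is a hypothetical stand-in; a real system would call a
# large language model. Nothing here reflects Comate's internals.

import subprocess
import sys
import tempfile


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (hypothetical)."""
    return f"# LLM output for: {prompt[:40]}..."


def break_down(requirements: str) -> list[str]:
    # Ask the model to split a requirements document into concrete tasks.
    response = call_llm(f"Split into tasks:\n{requirements}")
    return [line for line in response.splitlines() if line.strip()]


def generate_code(task: str) -> str:
    # Ask the model to write code for a single task.
    return call_llm(f"Write Python code for this task:\n{task}")


def run_and_debug(code: str, max_attempts: int = 3) -> str:
    # Execute the generated code; on failure, feed the error back for a fix.
    for _ in range(max_attempts):
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path], capture_output=True, text=True)
        if result.returncode == 0:
            return code
        code = call_llm(f"Fix this code:\n{code}\nError:\n{result.stderr}")
    return code


if __name__ == "__main__":
    requirements = "Build a coupon lottery turntable page."
    for task in break_down(requirements):
        print(run_and_debug(generate_code(task)))
```

In a real agent the debug loop would of course be driven by tests and much richer feedback than a single stderr dump; the point of the sketch is only the breakdown, generate, run, repair sequence described above.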

But the agent's chain-of-thought ability alone is not enough. We all know that with this generation of technology, large language models commonly hallucinate, and for programming that problem is simply fatal: code executed by a program has zero fault tolerance, and never mind a whole line of code, even a single letter or a colon cannot be wrong.

To ensure the code does not go wrong, RAG-based intelligent code retrieval is indispensable.

RAG (retrieval-augmented generation) is, simply put, the process of supplementing a large language model with knowledge-base information provided by the user. When the user asks a question, the model retrieves from this knowledge base to enhance the responses it generates.

Building on RAG, Comate has designed RAG-based intelligent code retrieval specifically for the programming domain: it retrieves from the code repository the content most likely to answer the user's question, thereby mitigating problems such as LLM hallucination.
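As a rough illustration of the retrieval-augmented pattern described above, here is a minimal sketch that ranks code snippets by naive token overlap and puts the best matches into a prompt. The in-memory snippet store, the similarity measure, and the prompt format are toy assumptions for the sketch; Comate's actual intelligent code retrieval is not shown here.

```python
# Minimal sketch of retrieval-augmented generation (RAG) over a code base.
# The similarity measure and prompt format are toy placeholders, not
# Comate's actual retrieval technology.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())


def retrieve(query: str, snippets: list[str], top_k: int = 2) -> list[str]:
    # Rank snippets by naive token overlap with the query.
    q = tokenize(query)
    ranked = sorted(snippets, key=lambda s: len(q & tokenize(s)), reverse=True)
    return ranked[:top_k]


def build_prompt(query: str, snippets: list[str]) -> str:
    # Ground the model's answer in retrieved repository context.
    context = "\n---\n".join(retrieve(query, snippets))
    return f"Answer using only this repository context:\n{context}\n\nQuestion: {query}"


if __name__ == "__main__":
    repo = [
        "def apply_coupon(order, coupon): ...  # validates and applies a coupon",
        "def spin_turntable(user): ...  # picks a random prize for the user",
        "def send_email(to, body): ...",
    ]
    print(build_prompt("How do we apply a coupon to an order?", repo))
    # The resulting prompt would then be sent to the large language model.
```

Production systems replace the token-overlap ranking with embedding models and a vector index, but the principle is the same: the model answers from retrieved, verifiable repository context rather than from memory alone.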

In addition, after using Comate you will notice that its responses are very fast. Behind this is a series of techniques with which PaddlePaddle has improved the inference efficiency of the Wenxin large model, also presented at WAVE SUMMIT+. For example, multi-stream parallel operator scheduling effectively reduces time spent waiting on hardware and blocking at runtime, and significant optimizations have been made in custom operators and custom fusion strategies. Continuous optimization of large-model inference efficiency is what allows Comate to give developers a silky-smooth development experience.
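For readers wondering what multi-stream parallel scheduling means in practice, below is a small, generic illustration using PyTorch's CUDA stream API, chosen only because it is widely known; it does not depict PaddlePaddle's internal scheduler. The idea is that independent operators launched on different streams can overlap on the GPU instead of waiting on one another.

```python
# Illustration of overlapping independent GPU work on separate CUDA streams.
# PyTorch is used only as a familiar example; this is not PaddlePaddle's
# actual multi-stream operator scheduler.

import torch


def independent_matmuls(a, b, c, d):
    if not torch.cuda.is_available():
        # Fallback: run sequentially on CPU.
        return a @ b, c @ d

    s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()
    with torch.cuda.stream(s1):
        x = a @ b              # launched on stream 1
    with torch.cuda.stream(s2):
        y = c @ d              # launched on stream 2, may overlap with stream 1
    torch.cuda.synchronize()   # wait for both streams before using the results
    return x, y


if __name__ == "__main__":
    dev = "cuda" if torch.cuda.is_available() else "cpu"
    a, b, c, d = (torch.randn(1024, 1024, device=dev) for _ in range(4))
    x, y = independent_matmuls(a, b, c, d)
    print(x.shape, y.shape)
```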

Comate has swept 8000+ businesses

It is worth mentioning that since the Comate SaaS service launched in October, it has been used by more than 8,000 businesses worldwide.

In addition, Comate's new open customization capability has been released: enterprises can tailor intelligent R&D capabilities to their own needs. By connecting private-domain knowledge, Comate can better understand the business, and enterprises can also fine-tune the model with their own code repositories to create an intelligent assistant better suited to each of them.

Today, Comate can also connect to third-party capabilities through plugins; the first batch of supported integrations includes Gitee, GitHub, GitLab, Postman, Jira, and more.

Incidentally, regarding one of the core technologies behind Comate, Wenxin Yiyan (ERNIE Bot): Baidu CTO Wang Haifeng disclosed at the conference that the number of Wenxin Yiyan users has exceeded 100 million.

This milestone was reached in just four months after Wenxin Yiyan was approved to open its service to the public on August 31, making it the first large-model product in China to surpass 100 million users.

Opening the era of AI-native applications

Behind this major upgrade of Comate and Wenxin Yiyan's breakthrough to more than 100 million users is Baidu's unwavering R&D investment in AI and its emphasis on AI-native applications.

Since 2019, Baidu has been deeply engaged in the research and development of pre-trained models, releasing Wenxin Large Model 1.0. After four years of accumulation, in March this year it became the first among the world's major technology companies to release Wenxin Yiyan, a knowledge-enhanced large language model. In October, Wenxin Yiyan's base model was upgraded to 4.0, with comprehensive improvements across the four basic capabilities of understanding, generation, logic, and memory; in the two months since, the overall performance of Wenxin Large Model 4.0 has improved by a further 32%. The continuous improvement of the Wenxin large model's base capability is an important reason it can provide such strong support across the development tool chain.

As early as the Baidu World Conference in October this year, a series of AI-native applications empowered by the large model was launched alongside Wenxin 4.0. Baidu founder, chairman, and CEO Robin Li predicted at that conference: "We will enter an AI-native era."

In December, Robin Li emphasized again at a conference: "With the advent of the era of large models, the real value lies in native applications; we have to go all in on AI-native applications to realize this value. Stop obsessing over the progress of the large models themselves; that is not an opportunity for most people."

The emergence of intelligence brought about by large models is the foundation for developing AI-native applications. The evolution of the Wenxin model's comprehensive capabilities in the 4.0 era lays the groundwork for the intelligent era to come. Beyond Wenxin 4.0 itself, Baidu has also launched more than ten applications rebuilt on the foundation model, such as Search, GBI, Infoflow, Wenku, Netdisk, and Maps. These AI-native applications built on the foundation model, together with Comate AutoWork announced yesterday, which can develop a new program in two minutes, are undoubtedly proof that Baidu is staking out the high ground of the AI-native era.

Finally, it is because Baidu deeply understands the importance of serving developers well: whoever wins over developers wins the world.

The era of AI-native applications is destined to begin with the AI-native Comate, and to be opened up by a new generation of developers who embrace AI-native development.

Article source: Zixi Xiaoyao Technology Talk.
