Use Coze to build a TiDB assistant


Reading guide

This article walks through building a TiDB document assistant on the Coze platform. By comparing different AI bot platforms, it highlights Coze's advantages in plug-in capabilities and ease of use. It then digs into the implementation principles, including key concepts such as the knowledge base, function calls, and embedding models, and finally demonstrates how to quickly create TiDB Help Bot on the Coze platform.

This article was written by Weaxs, a TiDB community evangelist.

At present, there are many platforms and applications for building AI bots, such as LangChain, Flowise, Dify, and FastGPT, and ByteDance has also launched Coze. Having tried Dify and FastGPT before, I feel that Coze offers far more plug-in capabilities and beats the other platforms in ease of use and build efficiency. For example, LangChain or Flowise require fairly complex orchestration logic to let a large model call out to Internet information, whereas Coze achieves the same by simply adding a plugin, without specifying any parameters.

So I wanted to try building a TiDB document assistant with Coze, and along the way study how the Coze platform abstracts large-model and related capabilities to improve ease of use and build efficiency.

First of all, let's set the Coze platform aside: given a large model's base capabilities, how do we give it access to document data?

Two approaches are covered here: knowledge base and function call. The advantage of a knowledge base is relatively accurate retrieval over non-real-time data; the advantage of a function call is fetching the latest data, including document data, in real time.

The plugins in the Coze platform implement the function-call pattern, and the platform also provides a knowledge base to manage local and online documents.

1 Embedding + vector database

Let's first introduce how to enhance a large model's capabilities with a text representation model (embedding model) plus a vector database (vector DB). The work splits into two tasks:

Offline task (synchronize the original documents into the vector database; a minimal sketch follows these steps):

i. Because the large model itself has a token-length limit, the original document must first be split into shards (in automatic segmentation mode, the Coze platform's knowledge base caps each shard at 800 tokens).

ii. Embed each shard with the embedding (text representation) model, converting it into a vector.

iii. Store the vectors in a specific collection in the vector database.
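To make the offline flow concrete, here is a minimal sketch in Python. It assumes a local Qdrant instance (see section 1.2) and reuses the tao_8k_embedding() helper from section 1.1; the collection name "tidb-docs" and the naive fixed-length chunking are illustrative choices, not what Coze does internally.

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

def sync_document(doc_text: str, chunk_size: int = 800):
    # i. Slice the original document into shards (naive fixed-length
    #    slicing here; Coze's automatic segmentation caps each shard
    #    at 800 tokens).
    chunks = [doc_text[i:i + chunk_size]
              for i in range(0, len(doc_text), chunk_size)]

    # ii. Embed each shard into a vector (tao-8k yields 1024 dimensions).
    vectors = tao_8k_embedding(chunks)

    # iii. Store the vectors in a dedicated collection in the vector database.
    client = QdrantClient(url="http://localhost:6333")
    client.recreate_collection(
        collection_name="tidb-docs",
        vectors_config=VectorParams(size=1024, distance=Distance.COSINE),
    )
    client.upsert(
        collection_name="tidb-docs",
        points=[
            PointStruct(id=i, vector=vec.tolist(), payload={"text": chunks[i]})
            for i, vec in enumerate(vectors)
        ],
    )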

Online task (answering user questions; a minimal sketch follows step iv):

i. Vectorize the user's question with the same embedding model.

ii. Using the question's vector, request an ANN (approximate nearest neighbor) query from the vector database and specify the top-k results to return.

iii. After getting the corresponding top-k shards, combine the shard contents with the user's question to piece together a complete prompt. An example is shown below, where quote is the sharded document content and question is the user's actual question.

Use the content inside the <quote></quote> marks as your knowledge:

<quote>
{{quote}}
</quote>

Answer requirements:

- If you are unclear about the answer, you need to clarify.

- Avoid mentioning that you are gaining knowledge from <quote></quote>.

- Keep the answers consistent with the descriptions in <quote></quote>.

- Use markdown syntax to refine the format of your answers.

- Answer in the same language as the question.

Question: "{{question}}"

iv. Finally, request the large model with this prompt and get the result.
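Correspondingly, here is a minimal sketch of the online task under the same assumptions (local Qdrant, the tao_8k_embedding() helper from section 1.1, and an OpenAI-style chat API standing in for "the large model"):

from openai import OpenAI
from qdrant_client import QdrantClient

PROMPT_TEMPLATE = """Use the content inside the <quote></quote> marks as your knowledge:
<quote>
{quote}
</quote>
Answer requirements: if you are unclear about the answer, you need to clarify;
avoid mentioning that you are gaining knowledge from <quote></quote>;
answer in the same language as the question.
Question: "{question}"
"""

def answer(question: str, top_k: int = 3) -> str:
    # i. Vectorize the user's question with the same embedding model.
    query_vector = tao_8k_embedding([question])[0]

    # ii. ANN query against the vector database, returning the top-k shards.
    client = QdrantClient(url="http://localhost:6333")
    hits = client.search(
        collection_name="tidb-docs",
        query_vector=query_vector.tolist(),
        limit=top_k,
    )

    # iii. Piece together the prompt from the shard contents and the question.
    quote = "\n".join(hit.payload["text"] for hit in hits)
    prompt = PROMPT_TEMPLATE.format(quote=quote, question=question)

    # iv. Request the large model and return the result.
    completion = OpenAI().chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content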

In this knowledge-base approach, the key components are the embedding model, the vector database, and the prompt. Let's focus on the embedding model and the vector database.

1.1 Embedding

If you want to try it yourself, I recommend choosing an open-source model from HuggingFace for the embedding model; for rankings, see HuggingFace's Massive Text Embedding Benchmark (MTEB) Leaderboard. For long Chinese text, tao-8k currently ranks near the top; its vector dimension is 1024. A concrete call example is as follows:

def tao_8k_embedding(sentences):
    import torch.nn.functional as F
    from transformers import AutoModel, AutoTokenizer

    model = AutoModel.from_pretrained("tao-8k")
    tokenizer = AutoTokenizer.from_pretrained("tao-8k")
    batch_data = tokenizer(
        sentences,
        padding="longest",
        return_tensors="pt",
        max_length=8192,
        # Turn off auto-truncation. The default is True, i.e. text longer
        # than 8192 tokens is truncated automatically.
        truncation="do_not_truncate",
    )
    outputs = model(**batch_data)
    vectors = outputs.last_hidden_state[:, 0]  # take the [CLS] embedding
    vectors = F.normalize(vectors, p=2, dim=1)
    return vectors
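Assuming the tao-8k model files are available locally, the helper can then be called like this:

sentences = ["What is TiDB?", "How does TiKV store data?"]
vectors = tao_8k_embedding(sentences)
print(vectors.shape)  # (2, 1024): one 1024-dimensional vector per sentence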

Of course, besides open-source models, vendors such as Baichuan, OpenAI, ChatGLM, and Wenxin all provide embedding APIs. See OpenAI's Embeddings documentation; for the others, check the respective official websites.
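As an illustration, here is a minimal sketch of the hosted alternative using OpenAI's embeddings endpoint (the model name follows OpenAI's embeddings documentation; an OPENAI_API_KEY environment variable is assumed):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.embeddings.create(
    model="text-embedding-ada-002",
    input=["What is TiDB?"],
)
vector = resp.data[0].embedding  # a 1536-dimensional vector for this model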

1.2 Vector database

There are also many choices for the vector database. Open-source options include Milvus (a distributed architecture from China), the standalone Qdrant, and the local, serverless Chroma; existing database systems such as Elasticsearch, pgvector, and Redis have been extended with vector capabilities; and there are even DBaaS offerings for vector databases, such as Zilliz Cloud. Applications aside, the core of a vector database comes down to three points: the choice of distance metric, the vector dimension, and the index type.

Taking Qdrant as an example, you can quickly bring it up with a Docker image. For writing to and querying the vector database, refer to the Qdrant API documentation.

docker pull qdrant/qdrant

docker run -p 6333:6333 -p 6334:6334 \
    -v $(pwd)/qdrant_storage:/qdrant/storage:z \
    qdrant/qdrant

2 System + plug-in (function call)

The knowledge-base approach largely delivers document Q&A capability, but it also has drawbacks:

You need to maintain a vector database, and if you use an open-source embedding model to reduce costs, you also have to host the embedding model yourself.

Real-time document synchronization. Once a document is updated, it must be re-synced promptly, otherwise stale data will be returned.

Here is another approach: a system persona plus function calls. The system part is relatively simple: a descriptive prompt sets the model's background, capabilities, goals, and other persona-related information. Function calls define extension capabilities for the large model, so that it can obtain data it cannot get on its own. Here's how they connect (a sketch of the full round trip appears in section 2.1):

The user sets the system persona and the functions, and asks a question.

The server assembles the request parameters, maps the plug-ins selected by the user to function tools for the large model, and then requests the large model.

The large model determines whether a function needs to be called.

If no function is needed, the server returns the large model's result directly.

If a function call is needed, the large model returns the specific function and its parameter values; the server executes the function using its own networking capability and feeds the result back to the large model.

After the large model gets the function's result, it finally gives the user a definitive answer.

2.1 Function call

I won't go over the system part here; let's talk about the function call.

As mentioned earlier, the plugins on the Coze platform use the function-call capability. Let's take the GitHub plugin as an example and try to express it in the function schema format defined by OpenAI; the shape is roughly as follows:

,"sort": ,"order": ,"required": ["q"

Now we can see how it works: OpenAI decides based on the functions we defined in advance, and if it needs a capability that a function provides, the large model returns a function-call request. Taking github-searchRepositories as an example, executing it actually means calling GitHub's OpenAPI and handing the result back to the large model.
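Putting this together, below is a minimal sketch of the full round trip using the OpenAI SDK. The github_search_repositories() helper is hypothetical and stands in for the server-side plug-in execution (i.e. calling GitHub's OpenAPI), and SEARCH_REPOSITORIES_SCHEMA is assumed to hold the schema above.

import json
from openai import OpenAI

client = OpenAI()
tools = [{"type": "function", "function": SEARCH_REPOSITORIES_SCHEMA}]
messages = [
    {"role": "system", "content": "You are TiDB Help Bot..."},
    {"role": "user", "content": "Find the official TiDB repository on GitHub."},
]

# First request: the large model decides whether a function call is needed.
resp = client.chat.completions.create(
    model="gpt-3.5-turbo", messages=messages, tools=tools,
)
msg = resp.choices[0].message

if msg.tool_calls:
    # The model returned a function name and arguments; the server executes
    # the function and feeds the result back to the model.
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = github_search_repositories(**args)  # hypothetical helper
    messages.append(msg)
    messages.append({
        "role": "tool",
        "tool_call_id": call.id,
        "content": json.dumps(result),
    })
    # Second request: the model composes the final answer from the result.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages, tools=tools,
    )

print(resp.choices[0].message.content)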

Now that we've covered the two underlying approaches, let's quickly build TiDB Help Bot on the Coze platform. But before we do, let's look at how CloudWeGo Help Bot is implemented.

1 CloudWeGo Help Bot

First, the build steps. Since what I want is a document assistant, I used CloudWeGo Help Bot as a reference to see how it is built.

You can see that there are three main parts here:

persona & prompt: Set the persona, skills, constraints, and goals for the large model. corresponding to system.

2. Plugins: a GitHub plugin that queries repositories through GitHub's SearchRepositories API, and a browser plugin that queries web pages and extracts the title, content, and links. This corresponds to the function part.

3. Opening Dialog: the opening remarks. Personally, I don't think this content participates in the interaction with the large model; its purpose is to help users quickly understand the bot's features and uses.

2 TiDB Help Bot

Now let's create TiDB Help Bot!

2.1 Plugins

The plugin setup is similar to CloudWeGo Help Bot, using github-searchRepositories and browser-browse raw.

In Persona & Prompt, the content needs to spell out TiDB's documentation address and repository addresses. The CloudWeGo Help Bot template is reused directly here, with the corresponding information changed to TiDB. An example is as follows:

# Role: TiDB Support and Assistance Bot

You're TiDB Help Bot, the dedicated support for all things TiDB. Whether users are troubleshooting, seeking documentation, or have questions about TiDB, TiKV, PD and other sub-projects, you're here to assist. Utilizing the official TiDB documentation and GitHub repositories, you ensure users have access to the most accurate and up-to-date information. You provide a smooth and productive experience.

## Skills

- Proficient in natural language processing to understand and respond to user queries effectively.

- Advanced web scraping capabilities to extract information from the official TiDB documentation.

- Integration with the official GitHub repositories for real-time updates and issue tracking.

- Knowledge of TiDB's sub-projects, such as TiDB, TiKV, and PD, to provide specialized assistance.

- User-friendly interface for clear communication and easy navigation.

- Regular updates to maintain synchronization with the latest documentation and GitHub repository changes.

## Constraints

- Adhere to copyright laws and terms of use for the TiDB documentation and GitHub repository.

- Respect user privacy by avoiding the collection or storage of personal information.

- Clearly communicate that the bot is a support and information tool, and users should verify details from official sources.

- Avoid promoting or endorsing any form of illegal or unethical activities related to TiDB or its sub-projects.

- Handle user data securely and ensure compliance with relevant privacy and data protection regulations.

## Goals

- Provide prompt and accurate assistance to users with questions or issues related to TiDB and its sub-projects.

- Offer detailed information from the official TiDB documentation for comprehensive support.

- Integrate with the GitHub repository to track and address user-reported issues effectively.

- Foster a positive and collaborative community around TiDB by facilitating discussions and knowledge sharing.

- Ensure the bot contributes to a smooth and productive development experience for TiDB users.

- Establish TiDB Help Bot as a trusted and reliable resource for developers and contributors.

- Encourage user engagement through clear communication and proactive issue resolution.

- Continuously improve the bot's capabilities based on user feedback and evolving needs within the TiDB community.

Next, you need to add a knowledge base on the home page. Note that the Coze platform divides knowledge into text format and table format: the former can only synchronize one document at a time, while the latter can synchronize multiple documents at once but requires CSV files or JSON returned from an API.

Taking the PingCAP Documentation Center homepage as an example, in text format we can directly paste the homepage address through the online-data option.

The opening remarks and opening questions can be automatically generated on the Coze platform, as follows:

I'm TiDB Help Bot, your dedicated support for all things TiDB. Whether you need troubleshooting assistance, documentation, or have questions about TiDB, TiKV, PD, and other sub-projects, I'm here to help. With access to the official TiDB documentation and GitHub repositories, I provide accurate and up-to-date information for a smooth and productive experience.

At this point, our TiDB Help Bot is ready.
