Quick Start
This page shows how to set up a tool like https://tidb.ai, from deployment to usage.
Step 1: Deployment
You can deploy self-hosted Autoflow on your server with Docker Compose.
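Once the containers are running, you can verify that the deployment is healthy. Below is a minimal sketch, assuming the default compose file publishes the web app on localhost port 3000 (check your docker-compose.yml for the actual port):

```python
import time
import urllib.request
import urllib.error

# Poll the web app until it responds. The URL assumes the default
# compose setup exposes Autoflow on http://localhost:3000.
URL = "http://localhost:3000"

for attempt in range(30):
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            if resp.status == 200:
                print("Autoflow is up and serving requests.")
                break
    except (urllib.error.URLError, OSError):
        pass  # Containers may still be starting; retry.
    time.sleep(10)
else:
    print("Autoflow did not come up; check `docker compose logs`.")
```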
Step 2: Configure
After deployment, you need to log in to the admin dashboard to configure the tool with your own settings.
Configure the LLM (Large Language Model)
Go to the Models > LLMs page to configure the LLM.
The LLM is used to extract knowledge from documents and to generate responses. You can replace the default LLM with another supported model.
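To make the LLM's role concrete, here is a minimal sketch of the kind of call made when generating a response, assuming an OpenAI-compatible model is configured. The model name, prompt, and hard-coded context are illustrative, not Autoflow's actual internals:

```python
from openai import OpenAI

client = OpenAI()  # Reads OPENAI_API_KEY from the environment.

# In Autoflow, retrieved chunks come from the knowledge base (Step 3);
# here the context is hard-coded for illustration.
context = "TiDB is a distributed SQL database compatible with MySQL."
question = "Is TiDB compatible with MySQL?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # Assumed model; use whatever you configured.
    messages=[
        {"role": "system", "content": f"Answer using this context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```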
Configure the Embedding Model
Go to the Models > Embedding Models page to configure the embedding model.
The embedding model generates a vector representation (an embedding) for a given input. Text must be translated into vectors with this model before it can be inserted into the vector database.
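For example, here is a minimal sketch of text-to-vector conversion, assuming OpenAI's embedding API is the configured embedding model:

```python
from openai import OpenAI

client = OpenAI()  # Reads OPENAI_API_KEY from the environment.

# Convert a document chunk into a fixed-length vector. The vector,
# not the raw text, is what gets stored in the vector database.
result = client.embeddings.create(
    model="text-embedding-3-small",  # Assumed model; match your config.
    input="TiDB is a distributed SQL database.",
)
vector = result.data[0].embedding
print(len(vector))  # e.g. 1536 dimensions for this model.
```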
Configure the Reranker [Optional]
The reranker reorders the results of the initial retrieval so that the most relevant passages come first. It is optional but recommended.
Go to the Models > Rerankers page to configure the reranker model.
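To illustrate what reranking does, here is a minimal sketch using a local cross-encoder from sentence-transformers. Hosted reranker services work on the same principle; the model name and example texts are illustrative:

```python
from sentence_transformers import CrossEncoder

# Candidates returned by the initial vector search, possibly in a
# suboptimal order.
query = "Does TiDB support MySQL syntax?"
candidates = [
    "TiDB is compatible with the MySQL protocol and most of its syntax.",
    "TiKV is a distributed key-value storage engine.",
    "TiDB Cloud is a fully managed database service.",
]

# The cross-encoder scores each (query, passage) pair jointly, which
# is slower but more accurate than vector similarity alone.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = model.predict([(query, passage) for passage in candidates])

# Reorder candidates so the most relevant passage comes first.
reranked = [p for _, p in sorted(zip(scores, candidates), reverse=True)]
print(reranked[0])
```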
Step 3: Add a New Knowledge Base and Upload Documents
Go to the Knowledge Base page to add a new knowledge base and upload documents.
After adding a new knowledge base, you can upload documents from your local machine or crawl pages from the web on the Data Source subpage.
After a data source is added, it takes some time for the data to be indexed.
For more details, please refer to Knowledge Base documentation.
Step 4: Set up the Chat Engine
Go to the Chat Engines page to set up the chat engine.
The chat engine defines how user questions are answered, including which models and knowledge bases are used.
Step 5: Usage
After deployment, configuration, and document upload, the tool is ready to answer your users' questions.
pingcap/autoflow provides several features to help you chat with users:
- Out-of-the-box chat interface, e.g. https://tidb.ai
- API to chat with users programmatically, e.g. https://tidb.ai/api-docs (see the sketch below)
- Embeddable chat widget to integrate with your website
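Here is a minimal sketch of calling the chat API from Python. The endpoint path, payload shape, and authentication scheme below are hypothetical; consult https://tidb.ai/api-docs for the real contract:

```python
import requests

BASE_URL = "https://tidb.ai"  # Or your self-hosted instance.
API_KEY = "your-api-key"      # Hypothetical auth; see the API docs.

# Hypothetical chat endpoint and payload, for illustration only.
resp = requests.post(
    f"{BASE_URL}/api/v1/chats",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "messages": [{"role": "user", "content": "What is TiDB?"}],
        "chat_engine": "default",
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.text)
```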