# LLM - Large Language Model
In this app, the LLM is used for several purposes:
- Extracting knowledge from documents;
- Generating responses to user queries.
## Configure LLM
After logging in with an admin account, you can configure the LLM in the admin panel.
- Click on the **Models > LLMs** tab;
- Click on the **New LLM** button to add a new LLM;
- Input your LLM information and click the **Create LLM** button;
- Done!
If you want the new LLM to be used when answering user queries, switch to the **Chat Engines** tab and set the new LLM as the chat engine's LLM.
## Supported LLM providers
Currently Autoflow supports the following LLM providers:
### OpenAI
To learn more about OpenAI, please visit OpenAI.
### Google Gemini
To learn more about Google Gemini, please visit Google Gemini.
### Vertex AI
To learn more about Vertex AI, please visit Vertex AI.
### Amazon Bedrock
To use Amazon Bedrock, you’ll need to provide a JSON object with your AWS credentials, as described in the AWS CLI config global settings:
```json
{
  "aws_access_key_id": "****",
  "aws_secret_access_key": "****",
  "aws_region_name": "us-west-2"
}
```
To learn more about Amazon Bedrock, please visit Amazon Bedrock.
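If Bedrock rejects your requests, it can help to confirm that the same credentials work outside Autoflow. Below is a minimal sketch using boto3 (an assumption: boto3 is installed and the credentials have Bedrock access); it lists the foundation models visible to the account:

```python
# Minimal sketch: verify that the Bedrock credentials work outside Autoflow.
# Assumes `pip install boto3`; the credential and region values are examples.
import boto3

client = boto3.client(
    "bedrock",
    aws_access_key_id="****",
    aws_secret_access_key="****",
    region_name="us-west-2",
)

# List the foundation models these credentials can see.
for model in client.list_foundation_models()["modelSummaries"]:
    print(model["modelId"])
```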
### Gitee AI
Follow the UI to configure the Gitee AI provider. To learn more about Gitee AI, please visit Gitee AI.
### OpenAI-Like
Autoflow also supports providers that conform to the OpenAI API specification.
To use OpenAI-Like LLM providers, you need to provide the `api_base` of the LLM API in the following JSON format in **Advanced Settings**:
```json
{
  "api_base": "{api_base_url}"
}
```
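Before saving the provider, you can verify that the endpoint actually speaks the OpenAI chat completions API. The sketch below uses the official `openai` Python package (an assumption: the package is installed; the base URL, API key, and model name are placeholders):

```python
# Minimal sketch: check that an OpenAI-like endpoint answers a chat request.
# Assumes `pip install openai`; base_url, api_key, and model are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="{api_base_url}",   # same value you put in "api_base"
    api_key="your-api-key",      # some local servers accept any non-empty key
)

response = client.chat.completions.create(
    model="your-model-name",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```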
#### OpenRouter
Default config:
```json
{
  "api_base": "https://openrouter.ai/api/v1/"
}
```
To learn more about OpenRouter, please visit OpenRouter.
#### Ollama
Default config:
```json
{
  "api_base": "http://localhost:11434"
}
```
To learn more about Ollama, please visit Ollama.
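To confirm that a local Ollama server is reachable and has models pulled, you can query its `/api/tags` endpoint, which lists local models. A minimal sketch with `requests` (assuming Ollama runs on the default port):

```python
# Minimal sketch: confirm Ollama is running and list locally pulled models.
# Assumes `pip install requests` and an Ollama server on the default port.
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
for model in resp.json()["models"]:
    print(model["name"])
```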
#### vLLM
Default config:
```json
{
  "api_base": "http://localhost:8000/v1/"
}
```
To learn more about vLLM, please visit vLLM.
#### Xinference
If you assigned a model UID different from the model name, you need to fill in the model UID in the **Model** box.
Default config:
```json
{
  "api_base": "http://localhost:9997/v1/"
}
```
To learn more about Xinference, please visit Xinference.
#### Azure OpenAI
To learn more about Azure OpenAI, please visit Azure OpenAI.

After creating the Azure OpenAI Service resource, you can configure the API base URL in **Advanced Settings**:
```json
{
  "azure_endpoint": "https://<your-resource-name>.openai.azure.com/",
  "api_version": "<your-api-version>",
  "engine": "<your-deployment-name>"
}
```
You can find these parameters in the Deployments tab of your Azure OpenAI Service resource.

Do not mix up **Model version** and **API version**; they are different.
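As a sanity check outside Autoflow, the same three values map directly onto the `AzureOpenAI` client in the official `openai` Python package (a sketch; every value below is a placeholder, and the deployment name configured as `engine` is passed as `model`):

```python
# Minimal sketch: call an Azure OpenAI deployment with the same parameters.
# Assumes `pip install openai`; every value below is a placeholder.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource-name>.openai.azure.com/",
    api_version="<your-api-version>",   # the API version, not the model version
    api_key="<your-api-key>",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # corresponds to "engine" in the config above
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```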
#### Novita AI
Default config:
```json
{
  "api_base": "https://api.novita.ai/v3/openai"
}
```
To learn more about Novita AI, please visit Novita AI.
#### DeepSeek
DeepSeek provides the chat model `deepseek-chat`.
Default config:
```json
{
  "api_base": "https://api.deepseek.com/v1",
  "is_chat_model": true
}
```
To learn more about DeepSeek, please visit DeepSeek.