Features Description
- 🎨 Brand-new UI (some pages are yet to be updated)
- 🌍 Multi-language support (to be improved)
- 🎨 Added Midjourney-Proxy(Plus) API support (before use, please confirm the third-party project's and upstream service's authorization, content-security, and terms-of-service requirements)
- 💰 Supports internal balance, cost allocation, or enterprise customer account management in legally authorized deployments, configurable in system settings:
  - Epay
- 🔍 Supports querying usage or balance information of legally authorized channels:
  - In conjunction with the new-api-key-tool project, usage can be queried by key.
- 📑 Pagination supports selecting the number of items displayed per page
- 🔄 Supports SQLite database storage, out-of-the-box, lightweight and convenient
- 💵 Supports internal cost accounting or enterprise customer billing, configurable in System Settings - Operations Settings
- ⚖️ Supports weighted random channel selection
- 📈 Data Dashboard (Console)
- 🔒 Can set models that a token can call
- 🤖 Supports Telegram authorized login:
  - System Settings - Configure Login & Registration - Allow Login via Telegram
  - Send the command `/setdomain` to @BotFather
  - Select your bot, then enter `http(s)://your_website_address/login`
  - The Telegram Bot name is the bot username without the `@`.
- 🎵 Added Suno API support (before use, please confirm the third-party project's and upstream service's authorization, content-security, and terms-of-service requirements)
- 🔄 Supports Rerank models, currently compatible with Cohere and Jina, and can be integrated with Dify
- ⚡ Supports OpenAI's Realtime API, including on Azure channels
- Supports using the `/chat2link` route to enter the chat interface
- 🧠 Supports setting reasoning effort via model name suffix:
  - OpenAI o-series models
    - Add suffix `-high` for high reasoning effort (e.g., `o3-mini-high`)
    - Add suffix `-medium` for medium reasoning effort (e.g., `o3-mini-medium`)
    - Add suffix `-low` for low reasoning effort (e.g., `o3-mini-low`)
  - Claude thinking models
    - Add suffix `-thinking` to enable thinking mode (e.g., `claude-3-7-sonnet-20250219-thinking`)
- 🔄 Thinking to Content: Supports setting the `thinking_to_content` option in `Channel - Edit - Channel Extra Settings`. Default is `false`. When enabled, the thinking content `reasoning_content` is converted into a `<think>` tag and appended to the returned content.
- 🔄 Model Rate Limiting: Supports setting model rate limits in `System Settings - Rate Limit Settings`, including a total request limit and a successful request limit.
- 💰 Cache Billing Support: When enabled, billing can be performed according to the set ratio upon a cache hit:
  - Set the `Prompt Cache Ratio` option in `System Settings - Operations Settings`.
  - Set the `Prompt Cache Ratio` in the channel, range 0-1. For example, setting it to 0.5 means billing at 50% upon a cache hit.
  - Supported channels:
    - OpenAI
    - Azure
    - DeepSeek
    - Claude
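The cache-billing arithmetic above can be sketched as follows (illustrative only; the function and variable names are assumptions, not new-api's internal code):

```python
# Sketch of the Prompt Cache Ratio billing described above.
# Cached prompt tokens are billed at `prompt_cache_ratio` (0-1);
# uncached tokens are billed at full price.

def billed_prompt_tokens(prompt_tokens: int, cached_tokens: int,
                         prompt_cache_ratio: float) -> float:
    uncached = prompt_tokens - cached_tokens
    return uncached + cached_tokens * prompt_cache_ratio

# With a ratio of 0.5, 1000 prompt tokens of which 600 hit the cache:
# 400 * 1.0 + 600 * 0.5 = 700 tokens billed.
```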