(Docs) OpenWeb x LiteLLM Docker compose + Instructions on spend tracking + logging (#9059)

* docs improve open web ui litellm doc

* docs openweb show teams + keys

* docs open web ui litellm
Ishaan Jaff 2025-03-07 17:01:39 -08:00 committed by GitHub
parent e753f61231
commit b5eeafdd72
GPG key ID: B5690EEEBB952194
4 changed files with 48 additions and 15 deletions


@@ -8,27 +8,55 @@ This guide walks you through connecting OpenWeb UI to LiteLLM. Using LiteLLM wit
- Access 100+ LLMs on OpenWeb UI
- Track Spend / Usage, Set Budget Limits
- Send Request/Response Logs to logging destinations like langfuse, s3, gcs buckets, etc.
- Set access controls, e.g. control what models OpenWebUI can access.
## Quickstart

- Make sure to set up LiteLLM with the [LiteLLM Getting Started Guide](https://docs.litellm.ai/docs/proxy/docker_quick_start)

## 1. Start LiteLLM & OpenWebUI

Use the provided docker compose file to run both OpenWebUI and LiteLLM.
```bash
docker compose up -d
```
- OpenWebUI starts running on [http://localhost:3000](http://localhost:3000)
- LiteLLM starts running on [http://localhost:4000](http://localhost:4000)
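The compose file itself is not shown in this diff; a minimal sketch of what such a file might look like (image tags, ports, volume paths, and the master key value below are assumptions — check the LiteLLM repo for the actual file):

```yaml
version: "3.9"
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest   # assumed image tag
    ports:
      - "4000:4000"
    volumes:
      - ./litellm_config.yaml:/app/config.yaml   # your model_list config
    command: ["--config", "/app/config.yaml"]
    environment:
      - LITELLM_MASTER_KEY=sk-1234               # placeholder master key

  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                              # OpenWebUI listens on 8080 inside the container
    depends_on:
      - litellm
```

Mapping host port 3000 to the OpenWebUI container and 4000 to LiteLLM matches the URLs used throughout this guide.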
## 2. Create a Virtual Key on LiteLLM
Virtual Keys are API Keys that allow you to authenticate to LiteLLM Proxy. We will create a Virtual Key that will allow OpenWebUI to access LiteLLM.
### 2.1 LiteLLM User Management Hierarchy
On LiteLLM, you can create Organizations, Teams, Users and Virtual Keys. For this tutorial, we will create a Team and a Virtual Key.
- `Organization` - An Organization is a group of Teams. (US Engineering, EU Developer Tools)
- `Team` - A Team is a group of Users. (OpenWeb UI Team, Data Science Team, etc.)
- `User` - A User is an individual user (employee, developer, e.g. `krrish@litellm.ai`)
- `Virtual Key` - A Virtual Key is an API Key that allows you to authenticate to LiteLLM Proxy. A Virtual Key is associated with a User or Team.
Once the Team is created, you can invite Users to the Team. You can read more about LiteLLM's User Management [here](https://docs.litellm.ai/docs/proxy/user_management_heirarchy).
### 2.2 Create a Team on LiteLLM
Navigate to [http://localhost:4000/ui](http://localhost:4000/ui) and create a new team.
<Image img={require('../../img/litellm_create_team.gif')} />
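Teams can also be created via the proxy API instead of the UI. A sketch using LiteLLM's `/team/new` endpoint, authenticated with your master key (the key value and team alias here are placeholders):

```bash
curl -X POST 'http://localhost:4000/team/new' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{"team_alias": "openwebui-team"}'
```

The response includes a `team_id`, which you can use when generating keys for that team.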
### 2.3 Create a Virtual Key on LiteLLM
Navigate to [http://localhost:4000/ui](http://localhost:4000/ui) and create a new virtual key.

LiteLLM allows you to specify what models are available on OpenWeb UI (by specifying the models the key will have access to).
<Image img={require('../../img/create_key_in_team_oweb.gif')} />
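Keys can likewise be generated via the API with the `/key/generate` endpoint. A sketch restricting the key to specific models (the master key, team id, and model names are placeholders):

```bash
curl -X POST 'http://localhost:4000/key/generate' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{"team_id": "<your-team-id>", "models": ["gpt-4o"]}'
```

The `models` list controls which models OpenWebUI will be able to access with this key.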
## 3. Connect OpenWeb UI to LiteLLM

On OpenWeb UI, navigate to Settings -> Connections and create a new connection to LiteLLM.
@@ -38,7 +66,7 @@ Enter the following details:
<Image img={require('../../img/litellm_setup_openweb.gif')} />
### 3.1 Test Request

On the top left corner, select a model. You should only see the models you gave the key access to in Step 2.
@@ -46,6 +74,14 @@ Once you selected a model, enter your message content and click on `Submit`
<Image img={require('../../img/basic_litellm.gif')} />
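You can also verify the virtual key outside of OpenWebUI by calling the proxy's OpenAI-compatible endpoint directly (the key and model name below are placeholders for the values you created above):

```bash
curl -X POST 'http://localhost:4000/v1/chat/completions' \
  -H 'Authorization: Bearer <your-virtual-key>' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

If the key does not have access to the requested model, the proxy rejects the call, which is a quick way to confirm your access controls are working.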
### 3.2 Tracking Spend / Usage
After your request is made, navigate to `Logs` on the LiteLLM UI, where you can see the Team, Key, Model, Usage, and Cost.
<!-- <Image img={require('../../img/litellm_logs_openweb.gif')} /> -->
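Spend data can also be consumed programmatically (LiteLLM exposes spend endpoints such as `/spend/logs`; the exact response shape may vary by version). A hypothetical sketch that aggregates cost per model from a logs payload — the field names `model` and `spend` are assumptions based on what the UI displays:

```python
import json

# Hypothetical /spend/logs payload; field names are assumptions,
# check your LiteLLM version's actual response shape.
logs_json = '''
[
  {"model": "gpt-4o", "spend": 0.0021, "total_tokens": 150},
  {"model": "gpt-4o", "spend": 0.0013, "total_tokens": 90},
  {"model": "o1-preview", "spend": 0.0100, "total_tokens": 300}
]
'''

def spend_per_model(logs: list) -> dict:
    """Aggregate spend grouped by model name."""
    totals = {}
    for entry in logs:
        totals[entry["model"]] = totals.get(entry["model"], 0.0) + entry["spend"]
    return totals

totals = spend_per_model(json.loads(logs_json))
print({m: round(v, 6) for m, v in totals.items()})
# → {'gpt-4o': 0.0034, 'o1-preview': 0.01}
```

The same grouping could be done per team or per key by swapping the field used as the dictionary key.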
## Render `thinking` content on OpenWeb UI

OpenWebUI requires reasoning/thinking content to be rendered with `<think></think>` tags. In order to render this for specific models, you can use the `merge_reasoning_content_in_choices` litellm parameter.

(Two binary image files changed: one GIF updated from 9.8 MiB to 13 MiB, one GIF added at 5.4 MiB.)

@@ -1,13 +1,10 @@
 model_list:
-  - model_name: model-1
+  - model_name: thinking-us.anthropic.claude-3-7-sonnet-20250219-v1:0
     litellm_params:
-      model: openai/model-1
-      api_key: os.environ/OPENAI_API_KEY_MODEL_1
-      api_base: https://exampleopenaiendpoint-production.up.railway.app/
-  - model_name: model-2
-    litellm_params:
-      model: openai/model-2
-      api_key: os.environ/OPENAI_API_KEY_MODEL_2
-      api_base: https://exampleopenaiendpoint-production.up.railway.app/
+      model: bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0
+      thinking: {"type": "enabled", "budget_tokens": 1024}
+      max_tokens: 1080
+      merge_reasoning_content_in_choices: true