mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-24 18:24:20 +00:00)
docs(routing.md): update routing docs
This commit is contained in: parent fd3895878d, commit 826f56a6a0
3 changed files with 15 additions and 11 deletions
@@ -247,16 +247,24 @@ print(f"response: {response}")
If you want a server to just route requests to different LLM APIs, use our [OpenAI Proxy Server](./simple_proxy.md#multiple-instances-of-1-model)
## Queuing (Beta)
This requires a [Redis DB](https://redis.com/) to work.
Our implementation uses LiteLLM's proxy server + Celery workers to process up to 100 req./s
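The real implementation (linked below) runs Celery workers against a Redis broker; as a rough mental model only, and not LiteLLM's actual code, the producer/worker pattern it follows can be sketched with the Python standard library. The `handle_request` function here is a hypothetical stand-in for forwarding a request to an LLM API:

```python
import queue
import threading

def handle_request(payload):
    # Hypothetical stand-in for the real work (forwarding to an LLM API).
    return {"id": payload["id"], "response": f"echo: {payload['prompt']}"}

def worker(jobs, results):
    # Each Celery worker plays roughly this role in the real setup.
    while True:
        payload = jobs.get()
        if payload is None:  # sentinel: shut this worker down
            jobs.task_done()
            break
        results.append(handle_request(payload))
        jobs.task_done()

jobs = queue.Queue()
results = []
threads = [threading.Thread(target=worker, args=(jobs, results)) for _ in range(4)]
for t in threads:
    t.start()

# The proxy endpoint enqueues each incoming request instead of handling it inline.
for i in range(10):
    jobs.put({"id": i, "prompt": f"question {i}"})
for _ in threads:
    jobs.put(None)
jobs.join()
for t in threads:
    t.join()
print(len(results))  # prints 10
```

Decoupling intake from processing this way is what lets a single proxy endpoint absorb bursts while a fixed pool of workers drains the queue at a steady rate.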
:::info
This is pretty new and might have bugs. Any contributions to improving our implementation are welcome.
:::
[**See Code**](https://github.com/BerriAI/litellm/blob/fbf9cab5b9e35df524e2c9953180c58d92e4cd97/litellm/proxy/proxy_server.py#L589)
### Quick Start
1. Add Redis credentials in a .env file
```python
# .env (illustrative variable names; substitute your own Redis credentials)
REDIS_HOST="your-redis-host"
REDIS_PORT="your-redis-port"
REDIS_PASSWORD="your-redis-password"
```