forked from phoenix/litellm-mirror
Update README.md
This commit is contained in:
parent 402b2e5733
commit bedcd22f40
1 changed file with 0 additions and 1 deletion
@@ -27,7 +27,6 @@
 LiteLLM manages
 - Translating inputs to the provider's `completion` and `embedding` endpoints
 - Guarantees [consistent output](https://docs.litellm.ai/docs/completion/output), text responses will always be available at `['choices'][0]['message']['content']`
-- Exception mapping - common exceptions across providers are mapped to the OpenAI exception types.
 - Load-balance across multiple deployments (e.g. Azure/OpenAI) - `Router` **1k+ requests/second**

 # OpenAI Proxy - ([Docs](https://docs.litellm.ai/docs/simple_proxy))
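The context lines in the hunk above document LiteLLM's consistent-output guarantee: every provider's response is normalized into the OpenAI chat-completion shape, so the generated text is always reachable at `['choices'][0]['message']['content']`. A minimal sketch of that access path, using a hand-built OpenAI-shaped dict rather than a real API call (the field values are illustrative, not actual model output):

```python
# Stand-in for a normalized LiteLLM/OpenAI-style completion response.
# No network call is made; this dict only mirrors the documented shape.
response = {
    "model": "gpt-3.5-turbo",
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",
            "message": {
                "role": "assistant",
                "content": "Hello! How can I help you today?",
            },
        }
    ],
}

# The access path the README guarantees, regardless of provider:
text = response["choices"][0]["message"]["content"]
print(text)
```

Because the shape is stable across providers, downstream code can index the response this one way instead of branching per backend.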