(docs) simple proxy

ishaan-jaff 2023-11-29 16:38:36 -08:00
parent 2d0432c5b7
commit 4b78481fbd


@@ -235,7 +235,7 @@ $ litellm --model command-nightly
</Tabs>
<!--
## Using with OpenAI compatible projects
Set `base_url` to the LiteLLM Proxy server
@@ -359,7 +359,7 @@ result = experts(query='How can I be more productive?')
print(result)
```
</TabItem>
</Tabs> -->
## Proxy Configs
The Config allows you to set the following params
@@ -374,14 +374,24 @@ The Config allows you to set the following params
#### Example Config
```yaml
model_list:
-  - model_name: zephyr-alpha
-    litellm_params: # params for litellm.completion() - https://docs.litellm.ai/docs/completion/input#input---request-body
-      model: huggingface/HuggingFaceH4/zephyr-7b-alpha
-      api_base: http://0.0.0.0:8001
-  - model_name: zephyr-beta
-    litellm_params:
-      model: huggingface/HuggingFaceH4/zephyr-7b-beta
-      api_base: https://<my-hosted-endpoint>
+  - model_name: gpt-3.5-turbo
+    litellm_params:
+      model: azure/gpt-turbo-small-eu
+      api_base: https://my-endpoint-europe-berri-992.openai.azure.com/
+      api_key:
+      rpm: 6      # Rate limit for this deployment: in requests per minute (rpm)
+  - model_name: gpt-3.5-turbo
+    litellm_params:
+      model: azure/gpt-turbo-small-ca
+      api_base: https://my-endpoint-canada-berri992.openai.azure.com/
+      api_key:
+      rpm: 6
+  - model_name: gpt-3.5-turbo
+    litellm_params:
+      model: azure/gpt-turbo-large
+      api_base: https://openai-france-1234.openai.azure.com/
+      api_key:
+      rpm: 1440
litellm_settings:
  drop_params: True
```
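
All three deployments in the new example register under the same `model_name`, so the proxy treats them as one load-balanced group for `gpt-3.5-turbo`, each with its own `rpm` limit. The config itself is plain YAML; a minimal sanity-check sketch (assuming PyYAML is installed, with the config inlined as a string purely for illustration):

```python
import yaml  # PyYAML

# The example config, inlined as a string for illustration.
config_yaml = """
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/gpt-turbo-small-eu
      api_base: https://my-endpoint-europe-berri-992.openai.azure.com/
      api_key:
      rpm: 6
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/gpt-turbo-small-ca
      api_base: https://my-endpoint-canada-berri992.openai.azure.com/
      api_key:
      rpm: 6
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/gpt-turbo-large
      api_base: https://openai-france-1234.openai.azure.com/
      api_key:
      rpm: 1440
litellm_settings:
  drop_params: True
"""

config = yaml.safe_load(config_yaml)

# All three deployments share one public model name, so a request for
# "gpt-3.5-turbo" can be routed to any of them.
names = {m["model_name"] for m in config["model_list"]}
total_rpm = sum(m["litellm_params"]["rpm"] for m in config["model_list"])
print(names)      # {'gpt-3.5-turbo'}
print(total_rpm)  # 1452
```

Because the group shares a single alias, clients keep sending `model="gpt-3.5-turbo"` and never need to know which Azure deployment served the request.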