(docs) proxy quick start

ishaan-jaff 2023-12-02 19:02:41 -08:00
parent 038433f0ad
commit a4c9e18eb5


@@ -88,9 +88,10 @@ print(response)
</Tabs>
## Quick Start - LiteLLM Proxy + Config.yaml
The config allows you to create a model list and set `api_base`, `max_tokens`, and any other litellm params. See more details about the config [here](https://docs.litellm.ai/docs/proxy/configs)
### Create a Config for LiteLLM Proxy
Example config
```yaml
model_list:
@@ -112,7 +113,7 @@ model_list:
litellm --config your_config.yaml
```
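Once the proxy is running, you can call it with any OpenAI-compatible request. A minimal sanity check, assuming the proxy is listening on its default `http://0.0.0.0:8000` (adjust if you pass `--port`) and that `gpt-3.5-turbo` is one of the `model_name` entries in your config:
```shell
# send an OpenAI-format chat completion request to the local proxy
# replace the host/port and model name with the values from your own setup
curl http://0.0.0.0:8000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "what llm are you?"}]
  }'
```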
## Quick Start Docker Image: Github Container Registry
### Pull the litellm ghcr docker image
See the latest available ghcr docker image here:
@@ -129,25 +130,26 @@ docker run ghcr.io/berriai/litellm:main-v1.10.0
#### Run the Docker Image with LiteLLM CLI args
See all supported CLI args [here](https://docs.litellm.ai/docs/proxy/cli):
Here's how you can run the docker image and pass your config to `litellm`
```shell
docker run ghcr.io/berriai/litellm:main-v1.10.0 --config your_config.yaml
```
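Note that the container can only read `your_config.yaml` if the file is visible inside it. A minimal sketch using a bind mount, assuming `/app/config.yaml` as the in-container path (the path itself is arbitrary, just keep the mount target and the `--config` value in sync):
```shell
# mount the local config into the container, then point litellm at the mounted path
docker run -v $(pwd)/your_config.yaml:/app/config.yaml \
  ghcr.io/berriai/litellm:main-v1.10.0 --config /app/config.yaml
```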
Here's how you can run the docker image and start litellm on port 8002 with `num_workers=8`
```shell
docker run ghcr.io/berriai/litellm:main-v1.10.0 --port 8002 --num_workers 8
```
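To check that the containerized proxy is reachable, you can hit the `/models` endpoint. A quick sketch, assuming the container's port is reachable from your host (with plain `docker run` you may also need to publish it, e.g. `-p 8002:8002`):
```shell
# list the models the proxy is serving; a JSON response means the server is up
curl http://0.0.0.0:8002/models
```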
## Server Endpoints
- POST `/chat/completions` - chat completions endpoint to call 100+ LLMs
- POST `/completions` - completions endpoint
- POST `/embeddings` - embedding endpoint for Azure, OpenAI, Huggingface endpoints
- GET `/models` - available models on server
- POST `/key/generate` - generate a key to access the proxy
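These endpoints accept the same request shapes as their OpenAI counterparts. For example, a sketch of an embeddings call, assuming the proxy runs on `http://0.0.0.0:8000` and your config exposes an embedding model under the name `text-embedding-ada-002`:
```shell
# request embeddings for a single input string through the proxy
curl http://0.0.0.0:8000/embeddings \
  -H "Content-Type: application/json" \
  -d '{
    "model": "text-embedding-ada-002",
    "input": ["hello world"]
  }'
```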
## Supported LLMs
All LiteLLM supported LLMs are supported on the Proxy. See all [supported llms](https://docs.litellm.ai/docs/providers)
<Tabs>
<TabItem value="bedrock" label="AWS Bedrock">