From a4c9e18eb52c6430e17d3bb6e52ee83a8bc7a47d Mon Sep 17 00:00:00 2001
From: ishaan-jaff
Date: Sat, 2 Dec 2023 19:02:41 -0800
Subject: [PATCH] (docs) proxy quick start

---
 docs/my-website/docs/proxy/quick_start.md | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/docs/my-website/docs/proxy/quick_start.md b/docs/my-website/docs/proxy/quick_start.md
index 265b02bcc6..f2ae9ea22a 100644
--- a/docs/my-website/docs/proxy/quick_start.md
+++ b/docs/my-website/docs/proxy/quick_start.md
@@ -88,9 +88,10 @@ print(response)
 
 ## Quick Start - LiteLLM Proxy + Config.yaml
+The config allows you to create a model list and set `api_base`, `max_tokens`, and any other litellm params per model. See more details about the config [here](https://docs.litellm.ai/docs/proxy/configs)
 
 ### Create a Config for LiteLLM Proxy
-Example
+Example config
 
 ```yaml
 model_list:
@@ -112,7 +113,7 @@ model_list:
 litellm --config your_config.yaml
 ```
 
-## Quick Start Docker Image
+## Quick Start Docker Image: GitHub Container Registry
 
 ### Pull the litellm ghcr docker image
 See the latest available ghcr docker image here:
@@ -129,25 +130,26 @@ docker run ghcr.io/berriai/litellm:main-v1.10.0
 
 #### Run the Docker Image with LiteLLM CLI args
 
+See all supported CLI args [here](https://docs.litellm.ai/docs/proxy/cli):
+
 Here's how you can run the docker image and pass your config to `litellm`
 ```shell
 docker run ghcr.io/berriai/litellm:main-v1.10.0 --config your_config.yaml
 ```
 
-Here's how you can run the docker image and pass your host, port to `litellm`
-Start litellm on port 8002
+Here's how you can run the docker image and start litellm on port 8002 with `num_workers=8`
 ```shell
 docker run ghcr.io/berriai/litellm:main-v1.10.0 --port 8002 --num_workers 8
 ```
 
-### Server Endpoints
+## Server Endpoints
 - POST `/chat/completions` - chat completions endpoint to call 100+ LLMs
 - POST `/completions` - completions endpoint
 - POST `/embeddings` - embedding endpoint for Azure, OpenAI, Huggingface endpoints
 - GET `/models` - available models on server
 - POST `/key/generate` - generate a key to access the proxy
 
-### Supported LLMs
+## Supported LLMs
 All LiteLLM supported LLMs are supported on the Proxy. See all [supported llms](https://docs.litellm.ai/docs/providers)
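
For reference, the endpoints this patch documents are OpenAI-compatible, so a plain `curl` against `/chat/completions` is enough to smoke-test a running proxy. A minimal sketch, assuming the proxy is listening locally on port 8000 (an assumption; use whatever `--port` you started it with) and that `gpt-3.5-turbo` is a `model_name` entry in your config:

```shell
# Hypothetical smoke test -- adjust host, port, and model to match your setup.
curl http://0.0.0.0:8000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "what llm are you"}]
  }'
```

If the config loaded correctly, the proxy should return a standard OpenAI-style chat completion JSON object.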