(docs) proxy quick start
This commit is contained in:
parent 038433f0ad
commit a4c9e18eb5
1 changed file with 8 additions and 6 deletions
@@ -88,9 +88,10 @@ print(response)
</Tabs>
## Quick Start - LiteLLM Proxy + Config.yaml
The config allows you to create a model list and set `api_base`, `max_tokens` (all litellm params). See more details about the config [here](https://docs.litellm.ai/docs/proxy/configs)
### Create a Config for LiteLLM Proxy
-Example
+Example config
```yaml
model_list:
@@ -112,7 +113,7 @@ model_list:
litellm --config your_config.yaml
```
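Once the proxy is running, a quick sanity check is to send it an OpenAI-format request. A minimal sketch, assuming the proxy is listening on its default `http://0.0.0.0:8000` for this version and that your `model_list` contains a model named `gpt-3.5-turbo` (both are assumptions, adjust to your setup):

```shell
# Hedged sketch: call the proxy's /chat/completions endpoint.
# The port (8000) and model name ("gpt-3.5-turbo") are assumptions --
# use whatever your litellm startup logs and your config.yaml actually say.
curl http://0.0.0.0:8000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "what llm are you"}]
  }'
```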
-## Quick Start Docker Image
+## Quick Start Docker Image: Github Container Registry
### Pull the litellm ghcr docker image
See the latest available ghcr docker image here:
@@ -129,25 +130,26 @@ docker run ghcr.io/berriai/litellm:main-v1.10.0
#### Run the Docker Image with LiteLLM CLI args
See all supported CLI args [here](https://docs.litellm.ai/docs/proxy/cli):
Here's how you can run the docker image and pass your config to `litellm`
```shell
docker run ghcr.io/berriai/litellm:main-v1.10.0 --config your_config.yaml
```
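Note that `your_config.yaml` has to be readable from inside the container. A minimal sketch using a bind mount; the in-container path `/app/config.yaml` is an assumption, any path works as long as `--config` points at it:

```shell
# Hedged sketch: bind-mount the host config into the container, then
# point --config at the in-container path (/app/config.yaml is arbitrary).
docker run \
  -v $(pwd)/your_config.yaml:/app/config.yaml \
  ghcr.io/berriai/litellm:main-v1.10.0 \
  --config /app/config.yaml
```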
-Here's how you can run the docker image and pass your host, port to `litellm`
-Start litellm on port 8002
+Here's how you can run the docker image and start litellm on port 8002 with `num_workers=8`
```shell
docker run ghcr.io/berriai/litellm:main-v1.10.0 --port 8002 --num_workers 8
```
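To reach that proxy from the host you will usually also want to publish the container port; a minimal sketch, assuming Docker's default bridge networking:

```shell
# Hedged sketch: publish container port 8002 on the host so that
# http://localhost:8002 reaches the proxy started above.
docker run -p 8002:8002 ghcr.io/berriai/litellm:main-v1.10.0 --port 8002 --num_workers 8
```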
-### Server Endpoints
+## Server Endpoints
- POST `/chat/completions` - chat completions endpoint to call 100+ LLMs
- POST `/completions` - completions endpoint
- POST `/embeddings` - embedding endpoint for Azure, OpenAI, Huggingface endpoints
- GET `/models` - available models on server
- POST `/key/generate` - generate a key to access the proxy
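For example, `/key/generate` is called with the proxy's master key. A minimal sketch, assuming a master key of `sk-1234` has been configured for the proxy and the same default port as above; the request fields shown are illustrative:

```shell
# Hedged sketch: mint a scoped key via /key/generate.
# The master key (sk-1234), port, and request fields are assumptions.
curl -X POST http://0.0.0.0:8000/key/generate \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{"models": ["gpt-3.5-turbo"], "duration": "30m"}'
```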
-### Supported LLMs
+## Supported LLMs
All LiteLLM supported LLMs are supported on the Proxy. See all [supported llms](https://docs.litellm.ai/docs/providers)
<Tabs>
<TabItem value="bedrock" label="AWS Bedrock">