(fix) clean up quick start
This commit is contained in:
parent 007d439017
commit 3612e47d6b

1 changed file with 0 additions and 68 deletions
@@ -363,74 +363,6 @@ print(query_result[:5])
- GET `/models` - list the models available on the server
- POST `/key/generate` - generate a key to access the proxy
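
For example, assuming the proxy is reachable at `http://localhost:4000` (adjust host and port to your deployment), these endpoints can be exercised with `curl`; the Bearer token is only needed if you configured a master key:

```shell
# List the models the proxy exposes
curl http://localhost:4000/models

# Generate a new proxy key (replace sk-1234 with your configured master key, if any)
curl -X POST http://localhost:4000/key/generate \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{"models": ["gpt-3.5-turbo"]}'
```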
## Quick Start Docker Image: GitHub Container Registry
### Pull the LiteLLM ghcr Docker Image
See the latest available ghcr docker image here:
https://github.com/berriai/litellm/pkgs/container/litellm
```shell
docker pull ghcr.io/berriai/litellm:main-latest
```
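
The docker-compose example later in this guide pins the `main` tag instead of `main-latest`; it can be pulled the same way:

```shell
docker pull ghcr.io/berriai/litellm:main
```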
### Run the Docker Image
```shell
docker run ghcr.io/berriai/litellm:main-latest
```
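
Note that the command above doesn't publish any container ports. To reach the proxy from the host, map the port explicitly (a sketch assuming you run the proxy on port 4000, as the docker-compose example below does):

```shell
# Publish container port 4000 on the host and tell litellm to listen there
docker run -p 4000:4000 ghcr.io/berriai/litellm:main-latest --port 4000
```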
#### Run the Docker Image with LiteLLM CLI args
See all supported CLI args [here](https://docs.litellm.ai/docs/proxy/cli):
Here's how you can run the Docker image and pass your config to `litellm`:
```shell
docker run ghcr.io/berriai/litellm:main-latest --config your_config.yaml
```
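
For `your_config.yaml` to be visible inside the container, you would typically mount it from the host first (a sketch; the `/app/config.yaml` target path follows the docker-compose example below):

```shell
# Mount the local config into the container, then point litellm at the mounted path
docker run -v $(pwd)/your_config.yaml:/app/config.yaml \
  ghcr.io/berriai/litellm:main-latest --config /app/config.yaml
```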
Here's how you can run the Docker image and start litellm on port 8002 with `num_workers=8`:
```shell
docker run ghcr.io/berriai/litellm:main-latest --port 8002 --num_workers 8
```
#### Run the Docker Image using docker compose
**Step 1**
- (Recommended) Use the example file `docker-compose.example.yml` provided in the project root, e.g. https://github.com/BerriAI/litellm/blob/main/docker-compose.example.yml
- Rename the file `docker-compose.example.yml` to `docker-compose.yml`.
Here's an example `docker-compose.yml` file:
```yaml
version: "3.9"
services:
  litellm:
    image: ghcr.io/berriai/litellm:main
    ports:
      - "4000:4000" # Map the container port to the host; change the host port if necessary
    volumes:
      - ./litellm-config.yaml:/app/config.yaml # Mount the local configuration file
    # You can change the port or number of workers as per your requirements, or pass any
    # other supported CLI argument. Make sure the port passed here matches the container
    # port defined above under `ports`.
    command: [ "--config", "/app/config.yaml", "--port", "4000", "--num_workers", "8" ]

# ...rest of your docker-compose config, if any
```
**Step 2**
Create a `litellm-config.yaml` file containing your LiteLLM config, placed next to your `docker-compose.yml` file (the compose file above mounts it from the same directory).
Check the config doc [here](https://docs.litellm.ai/docs/proxy/configs)
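
For reference, a minimal `litellm-config.yaml` might look like the sketch below; the model name and the `os.environ/OPENAI_API_KEY` reference are illustrative, so check the config doc above for the full schema:

```yaml
model_list:
  - model_name: gpt-3.5-turbo             # name clients will request via the proxy
    litellm_params:
      model: gpt-3.5-turbo                # underlying provider model
      api_key: os.environ/OPENAI_API_KEY  # read the key from the container's environment
```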
**Step 3**
Run the command `docker-compose up` or `docker compose up`, depending on your Docker installation.
> Use the `-d` flag to run the container in detached mode (background), e.g. `docker compose up -d`
Your LiteLLM container should now be running on the defined port, e.g. `4000`.
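
To verify, you can send a test request through the proxy (a sketch assuming host port `4000` and a `gpt-3.5-turbo` entry in your config):

```shell
# Send a test chat completion through the running proxy
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello, are you up?"}]
  }'
```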
## Using with OpenAI-compatible projects
Set `base_url` to the LiteLLM Proxy server
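
Many OpenAI-compatible tools read the base URL from the environment, so it is often enough to export it before launching the tool (a sketch; variable names vary by project, `OPENAI_BASE_URL` and `OPENAI_API_BASE` are common conventions):

```shell
# Point OpenAI-compatible clients at the LiteLLM proxy instead of api.openai.com
export OPENAI_BASE_URL=http://localhost:4000   # newer OpenAI SDKs
export OPENAI_API_BASE=http://localhost:4000   # older SDKs / other tools
export OPENAI_API_KEY=anything                 # or a key from /key/generate if auth is enabled
```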